All proposals

Partial Differential Equations is All You Need for Generating Neural Architectures — A Theory for Physical Artificial Intelligence Systems

In this work, we generalize the reaction-diffusion equation from statistical physics, the Schrödinger equation from quantum mechanics, and the Helmholtz equation from paraxial optics into neural partial differential equations (NPDE), which can be considered fundamental equations for artificial intelligence research.
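For reference, the standard textbook forms of the three equations being generalized are shown below; the specific NPDE form proposed in the paper is not reproduced here.

```latex
% Reaction-diffusion equation (u: concentration field, D: diffusion coefficient, R: reaction term)
\partial_t u = D \nabla^2 u + R(u)

% Time-dependent Schrödinger equation (\psi: wave function, V: potential, m: mass)
i\hbar\,\partial_t \psi = -\frac{\hbar^2}{2m} \nabla^2 \psi + V\psi

% Helmholtz equation (A: field amplitude, k: wavenumber)
\nabla^2 A + k^2 A = 0
```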

Federated Quantum Machine Learning

We present federated training of hybrid quantum-classical machine learning models, although our framework could be generalized to purely quantum machine learning models. Specifically, we consider a quantum neural network (QNN) coupled with a classical pre-trained convolutional model.
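As background, federated training of any parameterized model reduces to clients computing local updates and a server aggregating them. Below is a minimal FedAvg-style sketch in plain NumPy, with the hybrid quantum-classical model abstracted into a placeholder local update; all names and sizes are illustrative and not the paper's implementation.

```python
import numpy as np

def local_update(weights, data, lr=0.01):
    """Placeholder for a client's local training step (hypothetical; in the
    paper each client trains a QNN on top of a frozen pre-trained
    convolutional feature extractor)."""
    # A random perturbation stands in for actual gradient steps on `data`.
    return weights - lr * np.random.randn(*weights.shape)

def federated_average(client_weights, client_sizes):
    """FedAvg: weight each client's parameters by its local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy round of federated training with three clients.
global_weights = np.zeros(8)
client_sizes = [100, 50, 150]
for round_id in range(3):
    updates = [local_update(global_weights, data=None) for _ in client_sizes]
    global_weights = federated_average(updates, client_sizes)
    print(f"round {round_id}: ||w|| = {np.linalg.norm(global_weights):.4f}")
```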

Classification based on Topological Data Analysis

Topological Data Analysis (TDA) is an emerging field that aims to discover topological information hidden in a dataset. TDA tools have commonly been used to create filters and topological descriptors to improve Machine Learning (ML) methods. This paper proposes an algorithm that applies TDA directly to multi-class classification problems, even on imbalanced datasets, without any further ML stage.

Hugging Face datasets

One-line dataloaders for many public datasets & Efficient data pre-processing
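For instance, loading a public dataset and applying cached, batched pre-processing takes only a handful of lines; the dataset name and the preprocessing function below are illustrative.

```python
from datasets import load_dataset

# One-line dataloader: downloads and caches the IMDB dataset.
dataset = load_dataset("imdb")

# Efficient, cached pre-processing via a batched map.
def lowercase(batch):
    return {"text": [t.lower() for t in batch["text"]]}

dataset = dataset.map(lowercase, batched=True)
print(dataset["train"][0]["text"][:80])
```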

Probabilistic Machine Learning: An Introduction

“In this book, we will cover the most common types of ML, but from a probabilistic perspective. Roughly speaking, this means that we treat all unknown quantities (e.g., predictions about the future value of some quantity of interest, such as tomorrow’s temperature, or the parameters of some model) as random variables, that are endowed with probability distributions which describe a weighted set of possible values the variable may have. […]”
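As a toy illustration of this perspective (not an example from the book), here an unknown quantity, the bias of a coin, is treated as a random variable whose distribution is updated after observing data, and the prediction is itself a distribution rather than a point estimate.

```python
import numpy as np

# Beta prior on the coin's bias, updated after observed flips (Beta-Binomial conjugacy).
rng = np.random.default_rng(0)
heads, tails = 7, 3
posterior = rng.beta(1 + heads, 1 + tails, size=100_000)  # samples of the unknown bias

print("P(next flip is heads) ≈", round(posterior.mean(), 3))
print("90% credible interval for the bias:", np.percentile(posterior, [5, 95]).round(3))
```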

S++: A Fast and Deployable Secure-Computation Framework for Privacy-Preserving Neural Network Training

We introduce S++, a simple, robust, and deployable framework for training a neural network (NN) using private data from multiple sources, using secret-shared secure function evaluation. In short, consider a virtual third party to whom every data-holder sends their inputs, and which computes the neural network: in our case, this virtual third party is actually a set of servers which individually learn nothing, even with a malicious (but non-colluding) adversary.
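As background, here is a minimal sketch of the additive secret sharing that this style of secure computation builds on. This is not S++'s actual protocol, which additionally handles fixed-point arithmetic, multiplication, and malicious (but non-colluding) servers.

```python
import secrets

PRIME = 2**61 - 1  # arithmetic is done modulo a large prime (illustrative choice)

def share(x, n_servers=3):
    """Split a secret integer into n additive shares that sum to x mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_servers - 1)]
    shares.append((x - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Each data holder sends one share to each server; any single server sees
# only uniformly random values and learns nothing about x on its own.
x, y = 42, 100
x_shares, y_shares = share(x), share(y)

# Servers add their shares locally to obtain a share of x + y.
sum_shares = [(a + b) % PRIME for a, b in zip(x_shares, y_shares)]
assert reconstruct(sum_shares) == (x + y) % PRIME
print("reconstructed x + y:", reconstruct(sum_shares))
```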

Multi-Image Steganography Using Deep Neural Networks

Steganography is the science of hiding a secret message within an ordinary public message. Over the years, steganography has been used to encode a lower resolution image into a higher resolution image by simple methods like LSB manipulation. We aim to utilize deep neural networks for the encoding and decoding of multiple secret images inside a single cover image of the same resolution.
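To make the classical baseline concrete, here is a minimal least-significant-bit (LSB) embedding in NumPy; the paper's approach replaces such hand-crafted schemes with learned deep encoder and decoder networks that hide multiple full secret images.

```python
import numpy as np

def embed_lsb(cover, secret_bits):
    """Hide a bit array in the least significant bit of each cover pixel."""
    stego = cover.copy()
    flat = stego.reshape(-1)
    flat[: secret_bits.size] = (flat[: secret_bits.size] & 0xFE) | secret_bits
    return stego

def extract_lsb(stego, n_bits):
    return stego.reshape(-1)[:n_bits] & 1

cover = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
secret = np.random.randint(0, 2, size=32, dtype=np.uint8)
stego = embed_lsb(cover, secret)
assert np.array_equal(extract_lsb(stego, secret.size), secret)
print("max pixel change:", int(np.abs(stego.astype(int) - cover.astype(int)).max()))
```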

Addressing Ethical Dilemmas in AI: Listening to Engineers

- Documentation is key: design decisions in AI development must be documented in detail, potentially taking inspiration from the field of risk management.
- There is a need to develop a framework for large-scale testing of AI effects, beginning with public tests of AI systems and moving towards real-time validation and monitoring.
- Governance frameworks for decisions in AI development need to be clarified, including questions of post-market surveillance of product or system performance.
- Certification of AI ethics expertise would help support professionalism in AI development teams.
- Distributed responsibility should be a goal, resulting in a clear definition of roles and responsibilities as well as clear incentive structures for taking into account broader ethical concerns in the development of AI systems.
- Spaces for discussion of ethics are lacking and very necessary, both internally within companies and externally, provided by independent organisations. Policy should ensure whistleblower protection and ombudsman positions within companies, as well as participation from professional organisations.
- One solution is to look to the existing EU RRI framework and to ensure multidisciplinarity in the composition of AI system development teams. The RRI framework can provide systematic processes for engaging with stakeholders and ensuring that problems are better defined.
- The challenges of AI systems point to a general gap in engineering education: technical disciplines must be empowered to identify ethical problems, which requires broadening technical education programmes to include societal concerns.
- Engineers advocate for public transparency about adherence to standards and ethical principles for AI-driven products and services, to enable learning from each other's mistakes and to foster a no-blame culture.

El arte de la Inteligencia Artificial desde una perspectiva léxica (The Art of Artificial Intelligence from a Lexical Perspective)

The main objective of this document is to build a glossary from the lexical proposals made by different technical bodies (ISO, IEEE, Wikipedia, and Oxford University Press). Additionally, the glossary is structured according to the branches of knowledge in this field, determining exhaustively and in detail the characteristics of the terms included in it, so as to offer the reader a friendly as well as efficient reading experience.

Explainability in Graph Neural Networks: A Taxonomic Survey

We summarize current datasets and metrics for evaluating GNN explainability. Altogether, this work provides a unified methodological treatment of GNN explainability and a standardized testbed for evaluations.

Data Science: A First Introduction

The book is structured so that learners spend the first four chapters learning how to use the R programming language and Jupyter notebooks to load, wrangle/clean, and visualize data, while answering descriptive and exploratory data analysis questions. The remaining chapters illustrate how to solve four common problems in data science, which are useful for answering predictive and inferential data analysis questions[…]

Yann LeCun’s Deep Learning Course at CDS

This course concerns the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. The prerequisites include: DS-GA 1001 Intro to Data Science or a graduate-level machine learning course.

Bayesian Data Analysis: book & course

This book is intended to have three roles and to serve three associated audiences: an introductory text on Bayesian inference starting from first principles, a graduate text on effective current approaches to Bayesian modeling and computation in statistics and related fields, and a handbook of Bayesian methods in applied statistics for general users of and researchers in applied statistics. Although introductory in its early sections, the book is definitely not elementary in the sense of a first text in statistics.

Fourier Neural Operator for Parametric Partial Differential Equations

The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces. Recently, this has been generalized to neural operators that learn mappings between function spaces. For partial differential equations (PDEs), neural operators directly learn the mapping from any functional parametric dependence to the solution.
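Below is a minimal sketch of the spectral convolution that such operator layers are built from, shown here for discretized 1D functions; the channel and mode counts are illustrative, and this is not the authors' reference implementation.

```python
import torch

class SpectralConv1d(torch.nn.Module):
    """Multiply the lowest Fourier modes of the input by learned complex weights."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (channels * channels)
        self.weights = torch.nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x):            # x: (batch, channels, grid_points)
        x_ft = torch.fft.rfft(x)     # to Fourier space
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, : self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, : self.modes], self.weights
        )
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to physical space

layer = SpectralConv1d(channels=4, modes=8)
u = torch.randn(2, 4, 64)            # a batch of discretized 1D functions
print(layer(u).shape)                # torch.Size([2, 4, 64])
```

Because the learned weights act on Fourier modes rather than grid points, the same layer can be evaluated on inputs discretized at different resolutions.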

Evaluating and Characterizing Human Rationales

The two main approaches for evaluating the quality of machine-generated rationales are: 1) comparison with human rationales treated as a gold standard; and 2) automated metrics based on how rationales affect model behavior.
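A toy sketch of the second kind of metric: measure how the prediction changes when the rationale tokens are removed from the input. The classifier and the exact scoring rule below are illustrative stand-ins, not the paper's metrics.

```python
def comprehensiveness(predict_proba, tokens, rationale, label):
    """Drop in predicted probability for `label` when the rationale tokens are
    removed; a larger drop suggests the rationale really drove the prediction.
    `predict_proba` stands in for any classifier returning class probabilities."""
    reduced = [t for i, t in enumerate(tokens) if i not in rationale]
    return predict_proba(tokens)[label] - predict_proba(reduced)[label]

# Toy classifier: probability of "positive" grows with the count of "good".
def toy_model(tokens):
    p_pos = min(1.0, 0.2 + 0.4 * tokens.count("good"))
    return {"positive": p_pos, "negative": 1 - p_pos}

tokens = ["the", "movie", "was", "good", "really", "good"]
print(comprehensiveness(toy_model, tokens, rationale={3, 5}, label="positive"))
```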

Probabilistic Machine Learning for Healthcare

Machine learning can be used to make sense of healthcare data. Probabilistic machine learning models help provide a complete picture of observed data in healthcare. In this review, we examine how probabilistic machine learning can advance healthcare. We consider challenges in the predictive model building pipeline where probabilistic models can be beneficial, including calibration and missing data. Beyond predictive models, we also investigate the utility of probabilistic machine learning models in phenotyping, in generative models for clinical use cases, and in reinforcement learning.
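As one concrete example of the calibration concern, here is a small NumPy check of expected calibration error on synthetic risk predictions; the data and the miscalibrated "model" are purely illustrative, not from the review.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Bin predictions by confidence and compare the mean predicted
    probability with the observed event rate in each bin."""
    bin_ids = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            ece += mask.mean() * abs(probs[mask].mean() - labels[mask].mean())
    return ece

rng = np.random.default_rng(0)
true_risk = rng.uniform(size=5_000)                       # well-calibrated scores
labels = (rng.uniform(size=true_risk.size) < true_risk).astype(float)
overconfident = np.clip(true_risk * 1.3, 0.0, 1.0)        # a miscalibrated "model"

print("ECE, calibrated scores:  ", round(expected_calibration_error(true_risk, labels), 3))
print("ECE, overconfident model:", round(expected_calibration_error(overconfident, labels), 3))
```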

WHAT IS [MEANINGFUL PRIVACY]*

Meaningful privacy and how it is applied in technology will be the focus of 60 privacy-preserving leaders from around the globe during the OpenMined Privacy Conference, September 26 and 27, 2020, with more than 2,000 attending virtually.

Tidy Modeling with R

This book provides an introduction to how to use our software to create models. We focus on a dialect of R called the tidyverse that is designed to be a better interface for common tasks using R. If you’ve never heard of or used the tidyverse, Chapter 2 provides an introduction. In this book, we demonstrate how the tidyverse can be used to produce high quality models. The tools used to do this are referred to as the tidymodels packages.

Report on Publications Norms for Responsible AI

The history of science and technology shows that seemingly innocuous developments in scientific theories and research have enabled real-world applications with significant negative consequences for humanity.

The future of AI

If you wonder what is next in the evolution towards general AI, then this session is for you. We have seen some painful failures of artificial intelligence pointing to a lack of ‘common sense’. Are neural networks really the solution we seek, or is a new path needed? Find out what IBM Research is cooking in terms of hardware and software in the never-ending quest towards general AI.

Machine Learning from scratch (by Danny Friedman)

This book covers the building blocks of the most common methods in machine learning. This set of methods is like a toolbox for machine learning engineers. Those entering the field of machine learning should feel comfortable with this toolbox so they have the right tool for a variety of tasks.

COCIR analyses application of medical device legislation to Artificial Intelligence

“The European Commission has shown its ambition in the area of artificial intelligence (AI) in its recent White Paper on Artificial Intelligence – a European approach to excellence and trust. This White Paper is at the same time a precursor of possible legislation of AI in products and services in the European Union. However, COCIR sees no need for novel regulatory frameworks for AI-based devices in Healthcare, because the requirements of EU MDR and EU IVDR in combination with GDPR are adequate to ensure that same excellence and trust.” (COCIR paper).

IEEE Use Case–Criteria for Addressing Ethical Challenges in Transparency, Accountability, and Privacy of CTA/CTT

There are substantial public health benefits gained through successfully alerting individuals and relevant public health institutions of a person’s exposure to a communicable disease. Contact tracing techniques have been applied to epidemiology for centuries, traditionally involving a manual process of interview and follow-up. This is time-consuming, difficult, and dangerous work. Manual processes are also open to incomplete information because they rely on individuals being willing and able to remember and report all contact possibilities.
