Organisations increasingly use automated decision-making systems (ADMS) to inform decisions that affect humans and their environment. While ADMS can improve the accuracy and efficiency of decision-making processes, their use also raises ethical challenges. Unfortunately, the governance mechanisms currently used to oversee human decision-making often fail when applied to ADMS.
Documentation is key: design decisions in AI development must be documented in detail, potentially taking inspiration from the field of risk management. A framework for large-scale testing of AI effects is needed, beginning with public tests of AI systems and moving towards real-time validation and monitoring. Governance frameworks for decisions in AI development need to be clarified, including questions of post-market surveillance of product or system performance. Certification of AI ethics expertise would help support professionalism in AI development teams. Distributed responsibility should be a goal, resulting in a clear definition of roles and responsibilities as well as clear incentive structures for taking into account broader ethical concerns in the development of AI systems. Spaces for the discussion of ethics are lacking and much needed, both internally within companies and externally, provided by independent organisations. Policy should ensure whistleblower protection and an ombudsman position within companies, as well as participation from professional organisations. One solution is to draw on the existing EU Responsible Research and Innovation (RRI) framework and to ensure multidisciplinarity in the composition of AI system development teams. The RRI framework can provide systematic processes for engaging with stakeholders and for ensuring that problems are better defined. The challenges of AI systems point to a broader gap in engineering education: technical disciplines must be empowered to identify ethical problems, which requires broadening technical education programmes to include societal concerns. Engineers advocate for public transparency about adherence to standards and ethical principles for AI-driven products and services, to enable learning from each other's mistakes and to foster a no-blame culture.
The book focuses on machine learning models for tabular data (also called relational or structured data) and less on computer vision and natural language processing tasks. Reading the book is recommended for machine learning practitioners, data scientists, statisticians, and anyone else interested in making machine learning models interpretable.
This congress, motivated by companies' growing sensitivity to Governance, Risk and Compliance, focuses on generating a global view of processes, risk management, fraud, internal control, and regulatory and legislative compliance, without neglecting the methodology and execution of the corresponding reviews and audits.
The starting point for developing the operational definition is the definition of AI adopted by the High-Level Expert Group on artificial intelligence. To derive this operational definition, we followed a mixed methodology. On the one hand, we applied natural language processing methods to a large set of AI literature. On the other hand, we carried out a qualitative analysis of 55 key documents containing artificial intelligence definitions from three complementary perspectives: policy, research and industry.
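The passage above does not specify which NLP methods were applied to the AI literature. As a purely illustrative sketch (not the study's actual pipeline), term-frequency/inverse-document-frequency scoring is one standard way to surface the vocabulary that characterises each document in a set; the mini-corpus below is invented for demonstration.

```python
import math
import re
from collections import Counter

# Hypothetical mini-corpus standing in for the AI literature set;
# the real documents analysed in the study are not reproduced here.
documents = [
    "artificial intelligence systems perceive their environment and act",
    "machine learning systems learn from data to achieve goals",
    "artificial intelligence systems include reasoning planning and learning",
]

def tokenize(text):
    """Lowercase a document and split it into word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def tf_idf(docs):
    """Score each term in each document by term frequency times
    inverse document frequency, a standard keyword-extraction signal."""
    tokenized = [tokenize(d) for d in docs]
    n = len(tokenized)
    # Document frequency: in how many documents each term appears.
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        total = len(toks)
        scores.append({
            term: (count / total) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return scores

scores = tf_idf(documents)
# Terms occurring in every document (here "systems") score zero;
# terms unique to one document (here "perceive") score highest.
```

In practice a study of this kind would use a larger vocabulary pipeline (stop-word removal, lemmatisation, n-grams), but the scoring principle is the same.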
The TIMESTORM consortium, funded by the EU’s Future and Emerging Technologies (FET) programme, has transformed the notion of time perception in artificial intelligence from an immature, poorly defined subject into a promising new research strand, drawing on diverse expertise in psychology and neurosciences as well as robotics and cognitive systems.