
Managing the Risks from AI Algorithms

Replica Analytics November Webinar Summary

Panel Members: Daniel Shapiro, Lemay.ai; Khaled El Emam, Replica Analytics


In the webinar organized by Replica Analytics in November 2019 entitled "Managing the Risks from AI Algorithms," Daniel Shapiro and Khaled El Emam discussed some of the risks in developing and using AI tools, with an emphasis on healthcare applications.

The discussion focused on three main areas:

  1. security and privacy risks,
  2. ethical and trust risks, and
  3. implementation risks.

Daniel Shapiro warned against the blind use of AI tools and against the danger of not realizing that a problem exists, or could exist.

As with any technology, you have to be aware of the risks and potential issues in order to detect when you are experiencing one. Research in this area has exposed possible attacks and ways in which an AI (or machine learning) model can be manipulated to leak information or to produce inaccurate results. Checks are needed to ensure that security and privacy risks are being detected and properly managed.
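One well-studied example of such manipulation is the adversarial example: a tiny, carefully chosen perturbation of the input that flips a model's prediction. The following is a minimal sketch of the idea against a hypothetical linear classifier (the weights and inputs are illustrative, not from the webinar); for a linear model, the gradient of the score with respect to the input is simply the weight vector, so stepping each feature against it is the fastest way to change the prediction.

```python
import numpy as np

# Hypothetical linear classifier: these weights are illustrative only.
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    """Return the predicted class: 1 if the linear score is positive."""
    return int(np.dot(w, x) + b > 0)

# An input the model classifies as class 1.
x = np.array([0.3, 0.1])

# FGSM-style perturbation: nudge each feature against the gradient of the
# score with respect to the input (for a linear model, that gradient is w).
eps = 0.4
x_adv = x - eps * np.sign(w)

print(predict(x))      # 1: original prediction
print(predict(x_adv))  # 0: a small perturbation flips the prediction
```

The perturbation changes no feature by more than `eps`, yet the output changes completely; checks like this, run against your own model, are one way to surface the security issues described above.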

Ethical and trust risks can center on the fact that AI models contain biases. Because they learn from past information, it is possible that historical training data contains biases that will be replicated in the model if not controlled for at the outset. In the academic realm, ethics review boards and peer review systems ensure that AI research is conducted in an ethical manner. In corporate and other settings, such mechanisms may not be available. Having the tools in place to verify the model, and having third-party audits to perform cross-validation, are essential to ensure the model is performing in an ethical way and that its output can be trusted. However, some biases may not be correctable. In those cases, transparency is key to making users aware of the model's limitations.
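One simple verification of this kind is to compare a model's positive-prediction rates across demographic groups (a demographic-parity check). The sketch below assumes you already have the model's predictions and a group label for each record; the data shown is purely illustrative.

```python
from collections import defaultdict

def positive_rates(predictions, groups):
    """Positive-prediction rate per group: a basic demographic-parity check."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

# Illustrative data: 1 = favourable outcome predicted by the model.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = positive_rates(preds, groups)
print(rates)  # {'A': 0.75, 'B': 0.25}

# A large gap between groups flags a potential bias worth investigating.
gap = max(rates.values()) - min(rates.values())
```

A large gap does not by itself prove the model is unfair, but it is exactly the kind of measurable signal an audit can track, and, where the gap cannot be closed, the kind of limitation that should be disclosed transparently.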

Data is a significant concern when looking at implementation risks. Data quality is always an important consideration when building and training a model, and the amount of data available can have a significant impact on performance. Incorrect extrapolation can occur when a model draws significant conclusions from a small amount of data. A model that is sensitive to small changes can also cause major problems at implementation time: it must be robust to the small variations that exist in real life, or its use will not be practical in a real-world setting.
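A simple way to probe this kind of sensitivity is to add small random noise to test inputs and measure how often the prediction changes. The following sketch assumes a `predict` callable and a list of feature vectors (the toy threshold model is an invented stand-in, not something from the webinar):

```python
import random

def prediction_stability(predict, inputs, noise=0.05, trials=100, seed=0):
    """Fraction of noisy trials whose prediction matches the clean one."""
    rng = random.Random(seed)
    stable = 0
    total = 0
    for x in inputs:
        baseline = predict(x)
        for _ in range(trials):
            noisy = [v + rng.uniform(-noise, noise) for v in x]
            stable += predict(noisy) == baseline
            total += 1
    return stable / total

# Toy model: a threshold on the sum of the features (illustrative only).
model = lambda x: int(sum(x) > 1.0)

# An input far from the decision threshold is stable under small noise...
print(prediction_stability(model, [[2.0, 2.0]]))  # 1.0
# ...while one sitting exactly on the threshold is fragile.
print(prediction_stability(model, [[0.5, 0.5]]))  # roughly 0.5
```

Running such a check over representative inputs, including edge cases near decision boundaries, gives a concrete measure of whether the model can tolerate the small variations it will meet in a real-world setting.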

Key takeaways of the discussion were:

  • Be aware of security and privacy risks and potential attacks
  • Test for biases and ensure they are accounted for
  • Conduct ethics reviews to ensure the model does not discriminate or produce unethical results
  • Know your data so you can determine whether the results are correct and make sense
  • Make the model robust: it should handle small changes, so test for edge cases
  • Be transparent about the model’s limitations and communicate information about any biases that you could not control for
  • Customize the model to the specific solution

This webinar was part of a series of presentations that Replica Analytics is organizing to create greater awareness of the technical, social, and legal implications of using data for secondary purposes.

You can find the slide deck from this webinar here, and the video recording here. Information about our other webinars can be found in our knowledge base.