Tobias Straube

Explainability of AI output: Revisiting our understanding of morality

Updated: Oct 8, 2018

DARPA researchers want to know how and why machine learning algorithms get it wrong.

There are certainly some very detrimental consequences of inexplicability. What is the morality of transparency, or of its absence? In which domains do we need public or multi-stakeholder transparency to mitigate errors, biases, abuse, or manipulation? Conversely, are there areas where it is justifiable for explanatory functions to rest in the hands of only one party? Candidates might include:

- Public health decisions where mass panic is a risk
- Individual or group identity issues where psychological tensions could arise
- Competitive decisions in corporate strategy
- National security or intelligence analysis
- Crime investigations
- International trade negotiations
- Dating, trading, or shopping agents, and personal bouncer bots

And are the rules of the AI age different from those of the pre-AI era, given that AI penetrates our lives more deeply and pervasively, with more second- and third-order unintended consequences?

Check the full article here.
