Our Principles
We maximize algorithmic performance subject to uncompromising standards of ethics and transparency. Machine learning and artificial intelligence pose serious challenges to the way we operate and make decisions, and there are no easy answers. Rather than resist change, we choose to embrace the opportunity and face those challenges head-on. Below, we list the four principles that guide our work.
Interpretability
We make the rationale behind our algorithmic recommendations as transparent as possible
How do we approach it? First, we speak to clients to understand their standards of interpretability. For some prediction problems, the standards are higher than for others. Second, we select a model with an appropriate level of interpretability, and openly discuss whether this trades off against predictive power. While we generally prefer interpretable models with few inputs, a substantial gain in predictive power sometimes justifies lower interpretability. Third, we build features that actively increase interpretability, such as a ranked list of the inputs that drove a given prediction (for a beneficiary, this might be behavioral or demographic data).
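As an illustration, here is a minimal sketch of what such a per-prediction explanation can look like, assuming a simple linear model trained with scikit-learn. The feature names and data are hypothetical; in practice the explanation method is chosen together with the model.

```python
# A minimal sketch of per-prediction feature attribution for a linear model.
# Feature names and training data are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["visits_last_month", "age", "tenure_years"]  # hypothetical
X_train = np.array([[3, 34, 1.0], [0, 51, 4.5], [7, 29, 0.5], [1, 40, 2.0]])
y_train = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

def explain(x):
    """Rank features by their contribution (coefficient * value) to one prediction."""
    contributions = model.coef_[0] * x
    order = np.argsort(-np.abs(contributions))
    return [(feature_names[i], float(contributions[i])) for i in order]

print(explain(np.array([5, 30, 0.8])))
```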
Fairness
We make sure our algorithmic recommendations are aligned with our clients' fairness principles
How do we approach it? First, we engage in a discussion on fairness principles with our clients. Our employees are trained in philosophy and the social sciences, and are familiar with debates around ethics and social dynamics. Because data science is agnostic with respect to ethical norms, it is important to us to understand the values of our clients. Second, we evaluate the dataset used to train algorithms to understand potential sources of bias. Third, we consider options for building ethical values into the design of predictive models. For example, a model can be built to actively ignore gender, in order to create more equal outcomes across genders; this involves honest discussions about the limitations this imposes on predictive power. Fourth, we test the implications of the model's recommendations: for example, what recommendations does it generate for different subgroups, and how accurate is it for each of them?
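A subgroup check of this kind can be as simple as the sketch below, which computes accuracy per group. The group labels, ground truth, and predictions are placeholders for a client's actual data.

```python
# A minimal sketch of a per-subgroup accuracy check.
# Records are (group, truth, prediction) tuples; the data is illustrative only.
from collections import defaultdict

records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
    ("B", 1, 1), ("B", 0, 1), ("B", 0, 0),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    hits[group] += int(truth == pred)

for group in sorted(totals):
    print(f"group {group}: accuracy {hits[group] / totals[group]:.2f} "
          f"({totals[group]} records)")
```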
Privacy
We maintain the full privacy of our clients by adhering to the highest standards of cybersecurity and anonymization
How do we approach it? First, we minimize the use of sensitive data where possible. Second, we ensure that the owners of the data (sometimes the beneficiaries themselves) have consented, where such consent is mandated. This sometimes means asking our clients challenging questions. Third, we always encrypt data, whether at rest or in transit. Fourth, we insist on anonymization of data: removing unique identifiers that can be traced to individuals, and removing or altering data fields that lead back to individuals (e.g. precise location data).
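A minimal sketch of that anonymization step is shown below, with hypothetical field names. Note that salted hashing pseudonymizes rather than fully anonymizes an identifier, which is why we combine it with removing or coarsening fields such as precise coordinates.

```python
# A minimal sketch of anonymizing a record: salted hashing of the identifier
# plus coarsening of precise coordinates. Field names are hypothetical.
import hashlib

SALT = b"rotate-and-store-separately"  # placeholder; keep real salts secret

def anonymize(record):
    out = dict(record)
    out["id"] = hashlib.sha256(SALT + record["id"].encode()).hexdigest()[:16]
    out["lat"] = round(record["lat"], 1)  # ~11 km precision instead of metres
    out["lon"] = round(record["lon"], 1)
    return out

print(anonymize({"id": "beneficiary-0042", "lat": 52.37403, "lon": 4.88969}))
```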
Control
We make room for human judgment and decision making where appropriate
How do we approach it? First, we discuss our clients' needs with respect to control in the particular context. Some decisions can be automated without a layer of human control (e.g. predicting machine maintenance needs), while others are more sensitive (e.g. predicting whether a student will drop out of their program). Second, we identify the critical junctures in operations where human decision-making is required. Third, we build features into our models that ensure automatic checks and balances (e.g. by requiring explicit human approval before a recommendation is implemented).
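In its simplest form, such a check can be an explicit approval gate, as in the sketch below. The recommendation payload and the downstream execute_action call are hypothetical.

```python
# A minimal sketch of a human approval gate before a model recommendation
# is acted on. The recommendation payload is a hypothetical example.
def apply_recommendation(recommendation):
    print(f"Model recommends: {recommendation['action']} "
          f"(confidence {recommendation['confidence']:.0%})")
    answer = input("Approve? [y/N] ").strip().lower()
    if answer == "y":
        print("Approved by human reviewer; executing.")
        # execute_action(recommendation)  # hypothetical downstream call
    else:
        print("Rejected; recommendation logged for review, not executed.")

apply_recommendation({"action": "flag student for outreach", "confidence": 0.82})
```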