Aditi’s technology and AI ethics policy framework is based on international standards, ensuring our products, projects and activities are human-centric and ethical. We follow IEEE standards and frameworks to maintain our trustworthiness in the advancement and realization of AI systems. We aim to design, develop and operate in a way that is beneficial to people and the environment, beyond simply reaching functional goals and addressing technical problems. Ultimately, our goal is to practice principles that define technology that benefits human well-being, at both the individual and collective level. Accordingly, we have developed ethical principles for best-practice use of AI that are focused on trust and transparency, high-quality outcomes and customer benefit.
Our ethics policy statement applies to all our stakeholders, from employees at every level of the hierarchy to users at different levels. We are transparent about our ethics policy principles to assist with the implementation of our projects, processes, products and solutions.
At the organizational level, we have zero tolerance for violations of our ethics policy. We have policies in place addressing the Standard for Data Privacy Process, Ethical Concerns during System Design, Transparency of Autonomous Systems, Transparent Employer Data Governance, Ethically Driven Robotics and Automation Systems, Fail-Safe Design of Autonomous and Semi-Autonomous Systems, and Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems.
We practice our ethics principles to ensure we design technology systems that empower human beings, allowing them to make informed decisions and upholding their fundamental rights. At the same time, proper oversight mechanisms are in place that help us achieve high-quality systems through human-in-the-loop, human-on-the-loop, and human-in-command approaches.
Aditi is transparent about its data, systems and AI business models, and traceability mechanisms are in place to achieve this. Our technology products and AI systems, and their decisions, are explained in a manner adapted to the stakeholders involved. Whenever a user is interacting with one of our AI systems, they are made aware that they are interacting with an AI system and must be informed of the system’s capabilities and limitations.
We have mechanisms in place to ensure responsibility and accountability for our products, data and AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes plays a key role therein, especially in critical applications. Moreover, adequate and accessible redress is ensured.
Diversity, non-discrimination and fairness are practiced with zero tolerance for violations at all levels. We take every possible measure to avoid unfair bias and the marginalization of vulnerable groups. There is no place for prejudice or discrimination against any of our stakeholders, across rank and file, from the people we hire to the end users accessing our systems. We foster diversity, and our AI systems are accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.
At Aditi, we strive to ensure our technology products and AI systems are built to be resilient and secure. Guided by our policy frameworks, we take all necessary steps to ensure they are safe, that a robust, iteratively improved fallback plan is in place in case something goes wrong, and that they are accurate, reliable and reproducible. We proactively ensure that unintentional harm is minimized and prevented.
We treat data as an asset by identifying the data that matters for delivering better customer outcomes, governing and managing it effectively across the data lifecycle, and using and sharing it across government and as open data to generate insights that support decision making and innovation.
Aditi will ensure that the foundational technology and AI solutions we develop are trusted by the public, meet the highest ethical and assurance standards, are clearly focused on customer needs, and carefully manage potential risks. AI will not be used where there is no clear use case for doing so, or where its use might pose risks in relation to data, privacy or assurance.
The best use of AI depends on high-quality, relevant data. It also relies on careful data management to ensure potential data biases are identified and appropriately managed. AI solutions that rely on sub-optimal data may produce sub-optimal project outcomes and recommendations, and algorithms that contain systemic, repeatable errors may lead to prejudiced decisions or outcomes.
Our projects will clearly demonstrate these principles in practice.
Our ethics policy promotes best-practice use of AI focused on trust and transparency, high-quality outcomes and customer benefit. This approach is informed by the extensive work on AI in other jurisdictions and by non-government organizations in the United States. It has also been iteratively developed in consultation with the technology community, non-government organizations, government agencies, academia and our industry peers.
Our ethics policy framework is designed to ensure the following.
Not only must our clients have high levels of assurance that data is being used safely and in accordance with relevant legislation, they must also have access to an efficient and transparent review mechanism if there are questions about the use of data or AI-informed outcomes. The development of AI solutions must be robust technically, legally and ethically. The wider community should be engaged on the objectives of AI projects, and insights into data use and methodology will be made publicly available unless there is an overriding public interest in not doing so.
The potential benefits of AI are significant, but its use needs to be managed carefully to ensure risks are controlled and unintended outcomes avoided. The ethics policy has been designed for this purpose.
IEEE Standard Model Process for Addressing Ethical Concerns during System Design
IEEE Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems