March 2021

precision you strike on day one versus the eventual goal post. The partnership should lead to choosing a use case that gives a head start to their AI journey and yet qualifies as a significant one from a business-impact standpoint.

Invest in the Evolution of an AI Model

AI is not magic, and the outcome of implementing an AI model correlates directly with the underlying data that has gone into training it. Building an AI model involves continual iteration, and the outcome only gets better with new or more data over time. Do not expect human parity on day one. Businesses need to invest in the evolution of the model, which may take many iterations before it reaches an acceptable level of accuracy and precision.

Let's take a real-world example. A Technical Architect (TA) leads engagements with customers. By virtue of hearing and dealing with so many customer challenges, they get better each day in their ability to operate as trusted advisors. A TA's knowledge graph gets updated daily with new learnings, and they keep getting better at helping customers. Similarly, an AI model has an evolution journey too. Invest in it.

Set Guardrails for Responsible AI

From a data science lifecycle point of view, when you choose a use case, you either realize that you do not have an apt dataset, or you have probably taken on an area where the technology is still evolving. Either way, organizations will fail fast or end up building an AI model with a certain accuracy and precision.

But even before an organization starts building an AI plan and gets aspirational about infusing AI into its suite of applications, the element that should be the backbone of the overall design thinking is the principles of ethics and responsibility. Every organization needs to put in place guardrails on how it will develop and deploy AI models and, more importantly, on the impact those models will have on individuals.
At Microsoft, we have identified six principles for responsible AI that guide the development and use of AI with people at the center. These are fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Organizations may develop their own principles according to the nature of their business, but guiding principles will ensure that their AI models are trustworthy.

Establish Trust

This is where one needs to understand that putting a model into production alone is not success; success is ensuring the outcome adheres to the design principles indicated above. Over a period of time, this will ensure that users can trust the predictions these models generate, and that should be the eventual goal post for any business.

For example, imagine a model in the healthcare domain, where the system predicts the outcome of a patient's health check. For the care team to establish trust in the system, there have to be minimal 'false positives', and there needs to be a feedback loop for the care team to improve the system. Hence, establishing 'trust' is critical when assessing the AI impact of a use case.

To conclude, implementing AI in an organization needs business and technology leaders to invest in defining an operating manifesto that upholds the spirit of responsible AI. Success on this path will do a lot of good in instilling trust in the system. It is also imperative that one be realistic and practical about the expected outcome, and think long term.
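To make the healthcare example above concrete, the monitoring-and-feedback loop could look something like the sketch below. This is a minimal, hypothetical illustration: the function names, scores, labels, and the 5% tolerance are invented for this sketch, not taken from any real system, and the "feedback" is reduced to a simple threshold adjustment that a care team's corrections would drive.

```python
# Hypothetical sketch: track false positives for a health-check prediction
# model and feed the care team's corrections back into the decision
# threshold. All names and numbers here are illustrative assumptions.

def false_positive_rate(predictions, labels):
    """Fraction of truly negative cases the model incorrectly flagged."""
    false_pos = sum(1 for p, y in zip(predictions, labels) if p and not y)
    negatives = sum(1 for y in labels if not y)
    return false_pos / negatives if negatives else 0.0

def adjust_threshold(threshold, fpr, target_fpr=0.05, step=0.05):
    """Feedback loop in miniature: raise the decision threshold while the
    observed false-positive rate exceeds what the care team accepts."""
    return min(threshold + step, 1.0) if fpr > target_fpr else threshold

# Model scores for a review batch, plus the care team's ground-truth
# labels (True = condition actually present). Invented example data.
scores = [0.92, 0.81, 0.40, 0.75, 0.30, 0.88]
labels = [True, False, False, True, False, True]

threshold = 0.7
preds = [s >= threshold for s in scores]
fpr = false_positive_rate(preds, labels)  # 1 false positive / 3 negatives
threshold = adjust_threshold(threshold, fpr)
```

The point of the sketch is the shape of the loop, not the arithmetic: predictions go to the experts, their corrections come back as labels, and the system's behavior is tightened in response. That standing loop is what turns a deployed model into one the care team can trust.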