Building Responsible AI: From Research to Real-World Impact in Healthcare & Finance

About The Host

Dr. Swati Tyagi is an AI and machine learning professional with experience across research and industry, focused on building reliable, production-ready AI and large language model systems. Her work centers on AI evaluation, governance, and responsible deployment at scale. She is also an active speaker, known for bridging advanced AI capabilities with real-world enterprise needs.

About The Episode

As generative AI becomes more powerful and widely adopted, issues like hallucinations, bias, and unsafe deployment are becoming harder to ignore, especially in high-stakes industries like healthcare and financial services. How can organizations build AI systems that are innovative, yet reliable and responsible?
Dr. Swati Tyagi, an AI and machine learning professional with experience across research and enterprise, explains why evaluation, governance, and guardrails are essential for real-world AI. She breaks down how hallucinations occur, where bias is introduced during model training, and why GenAI must follow the same rigor as traditional machine learning before reaching production.
In this episode of Lessons From The Leap, Ghazenfer Mansoor sits down with Dr. Swati Tyagi to discuss responsible AI, LLMs, RAG, Graph RAG, and the challenges of moving AI from experimentation to production. Dr. Swati also shares advice for young professionals, especially women, looking to build meaningful careers in AI.

Action Steps:
  1. Evaluate before you automate: Treat AI evaluation as a prerequisite, not an afterthought. Structured testing, validation, and monitoring prevent hallucinations and unsafe behavior once systems reach production.
  2. Address bias at the foundation, not the surface: Bias is introduced during training, not just in outputs. Auditing data sources and feature selection early reduces unfair outcomes that are costly to fix later.
  3. Apply GenAI where it adds real value: Use large language models for unstructured problems like summarization and document understanding, while relying on traditional models for structured and causal decision-making.
  4. Ground AI systems in trusted data: Retrieval-augmented approaches, including Graph RAG, anchor AI responses in enterprise knowledge, improving accuracy, traceability, and confidence.
  5. Build governance into the lifecycle: Responsible AI requires guardrails, security controls, and continuous oversight. Treat governance as an ongoing operational responsibility, not a one-time compliance step.
Sponsor for this episode...

This episode is brought to you by Technology Rivers, where we revolutionize healthcare and AI with software that solves industry problems.

We are a software development agency that specializes in crafting affordable, high-quality software solutions for startups and growing enterprises in the healthcare space.

Technology Rivers harnesses AI to enhance performance, enrich decision-making, create customized experiences, gain a competitive advantage, and achieve market differentiation. 

Interested in working with us? Go to https://technologyrivers.com/ to tell us about your project.

Transcript

[00:00:15] Ghazenfer: Hello and welcome to Lessons from the Leap. I’m your host, Ghazenfer Mansoor. On this show, I sit down with entrepreneurs, founders, business leaders, and technical leaders to talk about the bold decisions, pivotal moments, and innovative ideas that shaped their journeys. This episode is brought to you by Technology Rivers.

At Technology Rivers, we bring innovation through technology and AI to solve real-world industry problems. If you’d like to learn more about us, head over to technologyrivers.com and tell us more about your project. Today we have with us Dr. Swati Tyagi, an AI and machine learning professional with experience across research and industry.

Dr. Swati, can you introduce yourself and tell us more about your journey? What are you currently working on? This is your time to introduce yourself; tell us anything you want our audience to know about you, and then we’ll dig deeper.

[00:01:11] Dr. Swati : Yeah. Thank you, Mr. Mansoor, for having me. It’s wonderful to be here and I really appreciate it. Starting with my introduction: I did my PhD in financial services analytics with a focus on AI, ML, and GenAI, and I will talk more about my research and how I started my PhD journey. Before that, I did my Master’s from IIT Delhi and my Bachelor’s in Computer Science and Engineering, back in India.

So to give some highlights about my personal side to begin with: I was born and raised in India, in a family of daughters, and I say that deliberately because in many parts of India, even today, being born a girl comes with a different set of expectations. There is often this unspoken pressure to keep your ambitions modest, to fit into certain roles, and not to ask for too much.

But my parents didn’t believe in that narrative. Not for a second. They fought, sometimes quietly and sometimes quite visibly, to make sure that their daughters had access to education, opportunity, and the belief that we could be anything we wanted. My mother especially taught us that circumstances don’t define capability.

That shaped everything about how we see the world. When I came to the United States for my PhD, I experienced something transformative in the intellectual freedom I found here. That’s something phenomenal. Here, ideas matter more than background. Questions were encouraged, not suppressed. The environment allowed me to dream bigger, ask harder questions, and ultimately pursue research focused on AI and machine learning systems that actually serve people, rather than just impressing with fancy terminology.

[00:02:59] Ghazenfer: Awesome, awesome. Welcome to the American Dream. That’s what you came here for. So thanks for sharing that. Your career spans research and enterprise AI leadership roles. What initially drew you to AI, and what were the key moments that shaped your journey from research to large-scale deployment?

[00:03:22] Dr. Swati : So when I started, early in my career, after my bachelor’s in computer science, I began as a software engineer. With time I saw that I needed a master’s in business administration and technology management to move up the ladder. Then I worked in the strategy, planning, and analytics domain on various analytics projects. During that time I realized that I enjoyed analytics and needed to dig deeper.

I did my research and saw that AI was taking off; there was a boom, a data scientist boom. Now it’s a clear boom, but even six years back, data scientists were really respected for the work they did. So I made up my mind to dig deeper into this, and I decided to pursue either a master’s or a PhD.

I applied to a number of universities based on my interests. I was not applying to plain computer science or pure MBA programs; I was looking for something intersectional that would let me work on real problems I could solve. And I will talk more about it.

From there, I applied and got into the University of Delaware’s PhD program in financial services analytics. It gave me the platform to work on various kinds of problems, like credit risk modeling, and on redlining data, where I could see hidden biases based on zip code.

And then finally I worked on natural language processing to understand how word embeddings work and how associations and biases form internally in the latent connections.

[00:05:13] Ghazenfer: Cool. So you talked about bias. Bias, hallucination: these are the common terms we keep hearing

[00:05:23] Dr. Swati : Yeah.

[00:05:23] Ghazenfer: in AI. So how do you resolve that? Let’s take them one at a time. Let’s talk first about hallucination, which is very common. You ask ChatGPT anything and it will always answer, even if it doesn’t know. It’s like one of those people who will always appreciate you and give you an answer even when it’s wrong, and obviously that creates a much bigger problem.

So tell our audience: what is hallucination? Why does it happen? And what can we, or the companies building these solutions, do to reduce the hallucination problem? I know I combined all three questions into one.

[00:06:10] Dr. Swati : Right. So these days we hear the terms LLM and GenAI, but let me explain the basics of the LLM. LLMs, large language models, have billions of parameters and are trained on the vast internet data that’s available to us, and you can imagine how much data the internet has. So these models are trained on huge data, they’re probabilistic in nature, and they are made to answer you with great confidence.

So what exactly is hallucination? Hallucination happens when these large language models, for example ChatGPT, generate information that has no grounding in their source data or in reality. The model is not making a mistake based on incomplete information; it is creating information that simply does not exist. For example, in healthcare, imagine a patient asks an AI-powered health assistant about their medication, and the model generates “morphine should be taken with grapefruit juice to enhance absorption.”

That sounds plausible. It’s grammatically correct, and people without a medical background might take it at face value. It has the structure of medical advice, but it is completely fabricated and potentially dangerous. And when we talk about financial services or healthcare, these are really critical sectors.

We cannot just use the ChatGPT API, build an agentic framework, and deploy it to production for patients or ordinary bank customers to use, because there will be consequences. We need to build various guardrails, with a comprehensive evaluation framework, to mitigate those challenges.

Before going to production, there are a lot of layers we need to take care of, and I can walk you through them one by one. First we need to understand the data; data is the most important thing. With these agentic frameworks, we need to understand the problem rather than trying to force-fit everything. ChatGPT is a consumer product: you and I can type any query and it answers. But when we are building an enterprise-level solution, we need to design it so that it does not give us fabricated information.

So always start with a small problem, try to solve it, and create a lot of synthetic data, plus actual data, to test it. And there are frameworks for that. So, yeah.
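
As a rough illustration of the output-side guardrails Dr. Swati describes, here is a minimal sketch that checks whether each sentence of a generated answer has lexical support in the retrieved source passages before it reaches a user. The overlap heuristic, threshold, and example text are illustrative assumptions, not a production evaluation framework.

```python
# Minimal sketch of an output-side grounding check (illustrative only).
# Idea: before returning an LLM answer, verify that each sentence has
# lexical support in the retrieved source passages; flag the rest.
import re

def sentence_support(sentence: str, sources: list[str], threshold: float = 0.5) -> bool:
    """True if enough of the sentence's content words appear in some source."""
    words = {w for w in re.findall(r"[a-z']+", sentence.lower()) if len(w) > 3}
    if not words:
        return True  # nothing substantive to check
    best = max(
        len(words & set(re.findall(r"[a-z']+", src.lower()))) / len(words)
        for src in sources
    )
    return best >= threshold

def guardrail(answer: str, sources: list[str]) -> tuple[str, list[str]]:
    """Split the answer into sentences and collect any that look ungrounded."""
    sentences = re.split(r"(?<=[.!?])\s+", answer.strip())
    flagged = [s for s in sentences if not sentence_support(s, sources)]
    return ("block" if flagged else "pass"), flagged

# Toy example: the grapefruit-juice claim has no support in the source.
sources = ["Morphine is an opioid analgesic. Avoid combining with alcohol."]
answer = ("Morphine is an opioid analgesic. "
          "Morphine should be taken with grapefruit juice to enhance absorption.")
verdict, flagged = guardrail(answer, sources)
print(verdict, flagged)  # block, with the fabricated sentence flagged
```

A production guardrail would layer stronger checks, such as entailment models, citation verification, or human review, on top of a simple filter like this.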

[00:08:51] Ghazenfer: Well, thanks for sharing that. So now the next part is bias.

[00:08:57] Dr. Swati : Yeah. 

[00:08:57] Ghazenfer: So, AI have a lot of inherent bias in that. So can you talk about that? How do you identify, how do you resolve that? And that’s your idea, so feel free to talk as much as you want on this topic. 

[00:09:15] Dr. Swati : Yeah, thank you. Let me use an analogy that will resonate with professionals in both the healthcare and financial services fields. Imagine bias as embedded in the foundation of a building. What most people see is the visible structure: the output, the recommendation, the decision. When we find biased outputs, we can patch them; we can add filters, adjust thresholds, post-process results. But bias enters when the model is trained. I said these models are trained on internet data, but do you think all sections of the community are represented equally on the internet, given history?

History has not represented everybody equally. Some segments are represented heavily, and some are barely represented. For example, there are fewer women scientists than male scientists in the data. If we give that data to the model, the model will learn that a man is more likely to be a scientist than a woman. And when these systems are used everywhere, when we talk about GenAI changing the world and about AGI, if we build biased systems, their decisions will be biased based on the data they were trained on.

For example, say you run a recruitment agency and you are using an automated AI system to filter resumes, and somebody applies for a position as a doctor. In the word embedding, which is a vector representation of the text, “doctor” sits closer to male terms in the latent space, which is not visible to us.

So in the latent space, the word “doctor” is closer to the gender “man” than to “woman.” When a woman and a man both apply for the same senior resident doctor position, whose resume will get picked? It will be the male resume. The same goes for demographics, and the same goes for race.

So there are many types of bias we need to work on. It could be age-related bias, gender-related bias, race-related bias, demographic bias, or any other case where the data is not balanced. When models are not trained on balanced data, that can lead to bias.

The bias exists in how the model represents concepts internally. For example, during my PhD I researched redlining data by zip code: the model predicts that people living in a particular zip code will default more, so their applications tend to get rejected more than others’.

So if I belong to a particular zip code that was not well represented in the model’s training, even if I’m highly educated, there’s a high chance my application will get rejected just because I live there. These bias and fairness issues are not visible to ordinary people. But if GenAI and AI systems are going to be everywhere, then we need to think twice, not just about accuracy but about the hidden patterns these models are trained on.
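
To make the latent-space point concrete, here is a toy sketch of how such an association can be measured with cosine similarity. The four-dimensional vectors are invented for illustration; a real audit would load trained embeddings (for example GloVe or word2vec) and run WEAT-style association tests across many occupation and attribute words.

```python
# Toy illustration of association bias in word embeddings. The vectors are
# made up for demonstration; real audits use trained embeddings such as
# GloVe or word2vec and WEAT-style association tests.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-d embeddings; "doctor" was deliberately placed nearer the
# "man" direction to mimic the skew that biased training data can produce.
emb = {
    "doctor": np.array([0.9, 0.8, 0.1, 0.3]),
    "man":    np.array([1.0, 0.7, 0.0, 0.2]),
    "woman":  np.array([0.2, 0.3, 1.0, 0.8]),
}

print("doctor~man:  ", round(cosine(emb["doctor"], emb["man"]), 3))    # ~0.99
print("doctor~woman:", round(cosine(emb["doctor"], emb["woman"]), 3))  # ~0.46
# If doctor~man consistently exceeds doctor~woman across many occupation
# words, downstream systems (e.g., resume screening) can inherit that skew.
```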

[00:12:41] Ghazenfer: On the same note, I know that in the hacking world, and more generally, wrong information is sometimes presented deliberately for the other side’s benefit.

Obviously that feeds into the bias as well, whether it’s certain groups talking more about allopathic versus homeopathic treatment, or conventional medicine versus natural remedies. So as you’re gathering this data, a lot of people are probably feeding in bad data, in some cases maybe crossing boundaries, and that creates those patterns because LLMs are being trained on that data.

[00:13:22] Dr. Swati : Yeah. 

[00:13:23] Ghazenfer: What could be done to avoid that. Is that a regulation thing? Is that the government involvement thing? Is that as organizations, companies should do? Is it at the individual level any thoughts on that? 

[00:13:40] Dr. Swati : Yeah, I think you have pointed that out correctly. Sometimes it’s not just the data; sometimes people deliberately inject wrong data, in what we call a prompt injection or model injection, because they want to manipulate the system, and there should be regulations for those kinds of actions.

For example, there was recently news of a model telling someone how to make a chemical formula, something that is not supposed to be shared. When we deploy these models in production, it’s very important for us to tightly scope how these models answer and predict.

Because sometimes the GenAI model, the large language model, answers something out of scope that it is not supposed to. So tightly scoping the model, including giving it negative prompts about what it is not supposed to do, is one step. Another is making guidelines and regulations for when models do not follow a particular standard. For example, under the HIPAA guidelines we have, you cannot simply use customer information, and you cannot store it; if you are using it, it must be as per HIPAA, and there are different aspects and different kinds of guidelines around that.

Also, in the US we have guidelines that you cannot use age, zip code, the person’s name, race, and various other attributes to build credit risk models, because these are personal information, and one should not introduce into the model things that can induce bias.

Models are built on features. So when you are deciding on the problem you’re trying to solve, whether it’s an insurance problem, a prior authorization problem, or a credit risk modeling problem, you pick the features, and you need to eliminate the features that can introduce bias. That’s the first thing that can be done as a guideline or as regulatory compliance.

There is also a model governance team in enterprise settings, especially in finance and healthcare. Based on my experience, they do not allow a model to go into production without reviewing it completely: that it is bias-free, and that it is fair to all sections of people. You cannot use gender, age, zip code, names, or other personal identifiers, the PII that can induce bias in the model; you need to remove all of that.

Then complete simulation and backtesting are done to understand how the model is performing, as part of compliance; the data science teams have to do this with the governance teams. And when I talk about GenAI, GenAI also needs to follow the same process.

The misconception in the industry is that you can just use the API of ChatGPT or Claude, build some agentic framework, and deploy it. No. GenAI models also need to go through the same cycle: filtering the data, PII redaction, applying all the necessary guardrails, and securing your data. And from time to time, just as with traditional models, where we do drift checks and model performance checks to see whether the model is giving correct output, with GenAI we have to do weekly, monthly, and quarterly checkups on how these systems are performing and whether they are staying within scope.

If not, then we find the reason and mitigate it as soon as we learn that a particular model is going out of scope, is unfair, or is vulnerable to prompt injection like you pointed out. It is really an enterprise-level duty, and there are countries that have made it a country-level regulation.

But I feel it should be the duty of every person: every data scientist, every AI/ML engineer, and every enterprise that develops and deploys these models. It is their responsibility to make the models fairer, bias-free, and within scope, so that they do not produce the wrong output.
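
As a small sketch of the feature-elimination step Dr. Swati describes, the snippet below drops protected attributes and PII columns before model training. The column names and the governance list are hypothetical examples, not an actual regulatory specification.

```python
# Minimal sketch of stripping protected attributes / PII before a credit-risk
# model is trained (column names are hypothetical examples).
import pandas as pd

PROTECTED_OR_PII = {"name", "age", "gender", "race", "zip_code", "ssn"}

def prepare_features(df: pd.DataFrame) -> pd.DataFrame:
    """Drop columns that governance rules forbid as model inputs."""
    dropped = [c for c in df.columns if c.lower() in PROTECTED_OR_PII]
    return df.drop(columns=dropped)

applications = pd.DataFrame({
    "name": ["A. Rao"], "age": [42], "zip_code": ["19716"],
    "income": [85000], "debt_to_income": [0.31], "on_time_payments": [36],
})
features = prepare_features(applications)
print(list(features.columns))  # ['income', 'debt_to_income', 'on_time_payments']
# Note: dropping columns is necessary but not sufficient; remaining features
# can still proxy for protected ones (e.g., geography), so fairness testing
# on model outcomes is still required before production.
```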

[00:18:00] Ghazenfer: Awesome. Thanks for sharing that. So you build systems that involve LLMs, RAG, and AI agents. How do you decide when these advanced technologies are truly necessary versus over-engineering? Because we see it very commonly: companies decide they will use a particular technology, but is it really necessary?

[00:18:29] Dr. Swati : These days something very interesting is happening: everybody wants to fit every problem into GenAI. They want to join the GenAI space and solve all their problems through it, which sounds good and appealing, but we know GenAI is not good at solving every problem.

It is good with unstructured data, because it is trained on billions of tokens and is very good at identifying patterns, whether that’s question answering, summarizing documents, extraction, and so on. Even sentiment analysis, intent detection, ordinary chatbots, and conversational systems.

These models are very good and efficient compared to the older generation built on spaCy or Dialogflow. But when it comes to structured data, whether it’s forecasting, Markov chain models, propensity models, risk or underwriting models, or causal models, they are not that great.

Those are different kinds of models, right? They need to be solved with traditional data science knowledge, and that’s why it’s so important these days to hire the right set of people, not those who know only GenAI. In this race, some people know only GenAI and try to fit everything into it.

We need people who understand both pillars, the traditional vertical and the GenAI vertical, because then they can set the strategy. We need people who know how to break a complex problem down into simpler parts: one part might be traditional, one part might be GenAI, and it can be a mix of both.

We can even call a traditional model from the chatbot, so we can combine both. These days the valuable expert is the one who knows both verticals and how they can work together. And it’s not just the chatbot that matters; it’s the workflow.

The workflow has different stages. For example, one stage predicts whether a customer will default or not; that is traditional. The second stage processes documents; that is where GenAI comes in. The third stage is the underwriting aspect, which is traditional again. And the fourth stage produces the overall summary, where GenAI comes in again. So it is a kind of workflow automation, but using both traditional and GenAI components.
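
Here is a minimal sketch of that four-stage hybrid workflow. The stage functions are illustrative stubs: in a real system, stage 1 would call a trained classifier, and stages 2 and 4 would call an LLM API.

```python
# Sketch of the hybrid workflow described above: traditional models for
# structured decisions, GenAI for unstructured text. All bodies are stubs.

def predict_default_risk(features: dict) -> float:
    """Stage 1 (traditional): score from a trained classifier; stubbed here."""
    return 0.12 if features["debt_to_income"] < 0.4 else 0.55

def extract_fields(document: str) -> dict:
    """Stage 2 (GenAI): document understanding; stubbed instead of an LLM call."""
    return {"stated_income": 85000, "employer": "Acme Corp"}

def underwrite(risk: float, fields: dict) -> str:
    """Stage 3 (traditional): deterministic underwriting rules."""
    return "approve" if risk < 0.2 and fields["stated_income"] > 50000 else "refer"

def summarize_case(decision: str, risk: float) -> str:
    """Stage 4 (GenAI): human-readable summary; stubbed instead of an LLM call."""
    return f"Decision: {decision} (modelled default risk {risk:.0%})."

def run_workflow(features: dict, document: str) -> str:
    risk = predict_default_risk(features)
    fields = extract_fields(document)
    decision = underwrite(risk, fields)
    return summarize_case(decision, risk)

print(run_workflow({"debt_to_income": 0.31}, "scanned loan application ..."))
```

The point is the routing: each stage uses whichever paradigm fits, and the stubs mark exactly where a trained model or an LLM call would plug in.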

[00:21:15] Ghazenfer: Yeah, even within GenAI, lots of people just assume that ChatGPT is AI. But just uploading your content isn’t enough. Once you have a lot of data, how do you train it? How do you use it? That’s where, as you said, it’s a combination of those things.

[00:21:33] Dr. Swati : Yeah. 

[00:21:33] Ghazenfer: Somehow there’s a lot of misconception about the AI that, oh, it will just give you always perfect result by just giving you two documents. But then once you start digging deeper, you realize it’s way, way more than, more than that.

[00:21:47] Dr. Swati : Yeah. 

[00:21:48] Ghazenfer: Right, so many organizations struggle to move from AI experiments to production. 

[00:21:54] Dr. Swati : Mm-hmm. 

[00:21:54] Ghazenfer: So what is your experience? What are the biggest blockers, and what actually works?

[00:22:00] Dr. Swati : Based on my experience, what I figured out is that different kinds of skills are required at different stages. We may have a team that is good at research and experimentation but doesn’t have the understanding of how to deploy on AWS, Azure, or GCP. So we need those people too. That’s why we separate the data science team and the MLOps team, and now the AI/ML engineer role has emerged.

So when we say GenAI is going to diminish the number of jobs, or that it will replace people: yes, it is replacing some, but we need more people with the right set of skills. Why are companies not able to move from the experiment or POC stage to production? Number one could be a lack of understanding of how to scale and deploy: whether to use ECS Fargate or serverless Lambda, how to do observability, whether to use LangSmith or other telemetry, how to evaluate, whether to use MLflow or some other tooling.

So the right set of people is required to scale from POC to production, and it takes a lot of effort; that’s why many companies don’t get there. The other reason is that a team may have found a problem interesting to solve with GenAI and built a POC, but then discovered how difficult it is to put all the guardrails in place and evaluate everything properly.

Or the effort of going to production isn’t justified because they are not finding enough value, or the person building and experimenting isn’t able to convey the value to higher management, the leadership: how this particular GenAI solution, or a traditional model, will solve the problem and what ROI it delivers. That is very important. So there are those two aspects.
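
On the evaluation-tracking side, here is a minimal sketch of logging GenAI quality checks with MLflow, one of the tools mentioned above. The experiment name and metric values are hypothetical; in practice the numbers would come from an evaluation harness.

```python
# Minimal sketch of tracking a GenAI evaluation run with MLflow
# (experiment, run, and metric names are hypothetical examples).
import mlflow

mlflow.set_experiment("genai-poc-eval")

with mlflow.start_run(run_name="rag-v2-weekly-check"):
    mlflow.log_param("model", "rag-v2")
    mlflow.log_param("temperature", 0.0)
    # In practice these numbers come from the evaluation harness.
    mlflow.log_metric("groundedness", 0.94)
    mlflow.log_metric("hallucination_rate", 0.02)
    mlflow.log_metric("exact_match", 0.88)
# Runs accumulate over time, giving leadership an auditable record of
# quality and drift, which also helps make the ROI case for production.
```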

[00:24:10] Ghazenfer: Cool. Thanks for sharing. So we were talking about LLMs. There are many models: OpenAI, Anthropic’s Claude.

[00:24:22] Dr. Swati : Mm-hmm. 

[00:24:23] Ghazenfer: Grok, Perplexity would talk few. If I mean, how do I decide which one to go with? It’s confusing. There are so many options available now.

[00:24:34] Dr. Swati : Yeah, every LLM has its own unique selling point, and when you start using them you can see the differentiation clearly. For example, ChatGPT is very good at summarizing and at creating essays, emails, and things like that, the probabilistic generation where you want to produce more and more text.

But Claude is really good at code. If you want to write very good code, you can rely on Claude, which gets it maybe 95% and sometimes even 100% correct. So Claude is good at coding; ChatGPT is good for your day-to-day activities, your email writing, essay writing, and summarization.

Perplexity is good at deep research. If you are an academic or a researcher and you want to read a paper quickly and pull out the key points, then Perplexity is very good.

And then comes Gemini. Gemini has a mix of all of these. It has image generation capability, with Nano Banana, and it has code generation capability, though for code I would say Claude is better than Gemini. And now even ChatGPT has caught up with Gemini in image generation. Earlier, Nano Banana created the hype for Gemini, but nowadays I use ChatGPT as well and it creates equally good images. So one can use either Gemini or ChatGPT; that’s your call.

[00:26:09] Ghazenfer: Okay. You’ve said before that AI without evaluation is blind trust. Can you unpack that, especially for people in healthcare?

[00:26:20] Dr. Swati : Yes. When we talk about high-stakes domains like finance and healthcare, factual accuracy matters first and most. It’s not about how grammatically well these systems produce text, but how factually correct they are. With traditional models, we had training data, validation data, and test data.

What happens with GenAI is that we don’t have the training data, unless we are doing fine-tuning or using an open-source model like Llama; then we have our own data, and we can fine-tune these models and tell them how to behave. That is also very effective, and there are different techniques in the market, like LoRA and QLoRA. However, even if you are just using the API, you still have to build a gold dataset, a validation set, and a test set before going to production, or at the UAT stage. That is a must.

The second thing is hallucination detection. How often are we checking, and what frameworks and measures do we have to detect fabrication? Are we doing deterministic LLM programming or stochastic LLM programming? There are differences there. Are we setting the temperature to zero? Are we setting guardrails, not only at the query stage when somebody asks something, but also at the output stage, to verify that the answer it gives is correct and grounded in the database?

That’s why RAG came into the picture, and now Graph RAG has come into the picture. RAG, retrieval-augmented generation, gives you an augmentation layer on top of ChatGPT or any other LLM system, where you can feed in your external data, your enterprise data. And second, nowadays I would suggest not stopping at traditional, plain RAG: go with a Graph RAG architecture, because graphs are very good at identifying entities, nodes, and patterns, and they can retrieve quickly and give you the correct information.
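
As a rough sketch of the plain RAG pattern just described, the snippet below retrieves enterprise passages for a query and assembles them into a grounding prompt. The keyword-overlap retriever and the knowledge-base strings are toy assumptions; real systems use vector or graph indexes, and generate() here is a stub rather than a call to any specific LLM API.

```python
# Minimal sketch of the RAG pattern: retrieve enterprise passages, then pass
# them to the LLM as grounding context. Retrieval is a keyword-overlap
# stand-in; the generate() step only builds the prompt instead of calling an LLM.
KNOWLEDGE_BASE = [
    "Policy 12: refunds are processed within 5 business days.",
    "Policy 31: wire transfers above $10,000 require manual review.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (toy retriever)."""
    score = lambda doc: len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(KNOWLEDGE_BASE, key=score, reverse=True)[:k]

def generate(query: str, context: list[str]) -> str:
    """Assemble the grounded prompt; a real system would send this to the LLM."""
    return ("Answer ONLY from the context.\nContext:\n" + "\n".join(context)
            + f"\nQuestion: {query}")

print(generate("How long do refunds take?", retrieve("How long do refunds take?")))
```

In a Graph RAG variant, the retrieval step would walk an entity graph rather than a flat list, which is what improves traceability back to sources.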

So you can mitigate hallucination: for example, if your hallucination rate is, say, 20 or 30%, you can bring it down to 1 or 2% with Graph RAG. These are techniques, like knowledge graphs, that can be used. Then there is bias assessment: you can train a small language model to detect the specific biases the system is not supposed to exhibit and test against that.

Then regex pattern checks, numerical accuracy checks, and unit testing can be done, and there are regulatory compliance checks that are required; you need unit testing and deterministic programming for those. So there are many different layers that can be put in place before deploying the system to production.
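
Here is a minimal sketch of such a deterministic pre-production pass: scoring the system against a small gold set with exact-match and a regex-based numerical accuracy check. The gold questions and the model_fn stub are hypothetical placeholders for the system under test, which would be called with temperature 0.

```python
# Sketch of a deterministic pre-production evaluation pass: score the system
# against a small gold set and run a regex check on numerical answers.
# (Gold questions and the model_fn stub are hypothetical placeholders.)
import re

GOLD_SET = [
    {"question": "What is the standard adult dose interval?", "answer": "every 6 hours"},
    {"question": "What APR applies to tier-1 customers?", "answer": "4.5%"},
]

def model_fn(question: str) -> str:
    """Placeholder for the system under test (called with temperature 0)."""
    return {"What is the standard adult dose interval?": "every 6 hours",
            "What APR applies to tier-1 customers?": "4.5%"}[question]

def evaluate(model, gold):
    exact, numeric_ok = 0, 0
    for item in gold:
        pred = model(item["question"]).strip().lower()
        exact += pred == item["answer"].lower()
        # Numerical accuracy: every number in the gold answer must appear verbatim.
        nums = re.findall(r"\d+(?:\.\d+)?", item["answer"])
        numeric_ok += all(n in pred for n in nums)
    n = len(gold)
    return {"exact_match": exact / n, "numeric_accuracy": numeric_ok / n}

print(evaluate(model_fn, GOLD_SET))  # {'exact_match': 1.0, 'numeric_accuracy': 1.0}
```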

[00:29:19] Ghazenfer: Yeah. So what advice would you give to young professionals, especially women, entering AI in healthcare or finance? Any advice for those people?

[00:29:31] Dr. Swati : First of all, I would say that your voice matters and your dreams matter. If somebody asks what you will do with a PhD, or questions you for going abroad at the age of 30 or 35, do not listen to them. Always listen to what your strengths and expertise are; you can build your career at any age. That’s what I want to say.

And second, don’t assume your questions are too basic or your ideas too unconventional. The industry needs diverse perspectives, because when the people building AI systems bring diverse perspectives, the models get built from diverse angles.

Third, build bridges. Connect technology with the domain, whether that’s clinical practice and compliance in healthcare or something else. Don’t think from a pure tech perspective; always build your expertise around a domain, because nowadays, when code and a lot of other work can be automated, the new skill you need is your domain. Whether you come from the legal domain, the health domain, or the finance domain, be an expert in that, and then add the technology layer on top.

And fourth, stay curious. The field is evolving faster than we imagine, so be a lifelong learner, embrace uncertainty, and always be like a curious child who goes and researches more and more, because learning is the only constant we should hold on to in our lives.

[00:31:18] Ghazenfer: That’s a very powerful answer, very powerful advice. And no question is too basic; what matters is having questions. People who ask questions are far more visible and appreciated than people who don’t.

So what is the most innovative AI technology you have found? Among everything that’s coming out nowadays, is there one thing you really loved lately, something mind-blowing?

[00:31:55] Dr. Swati : These days I really admire how knowledge graphs work. I worked with Cigna two or three years back as a data scientist, and they were trying to solve what sounded like a simple problem on unstructured data, but it turned out to be very difficult when we applied large language models directly to that data to summarize it. They had pharmacies, customers, zip codes, and doctors, and they wanted a crisp summarization of customer feedback with respect to a particular doctor or pharmacy: whether the customer had an issue with the pharmacy, the doctor, the website, the mobile app, or something else.

But when I fit the whole dataset into a graph structure and then ran large language models, or any other algorithm, on top of it, the results were really amazing. This knowledge graph and graph architecture really amazed me with how well it worked.
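
Here is a toy sketch of that idea: representing feedback as a graph so complaints can be retrieved per entity before an LLM summarizes them. The entities and complaints are invented examples built with networkx, not actual data from the project.

```python
# Toy version of the feedback knowledge graph described above, using networkx.
# Entities and complaints are invented examples.
import networkx as nx

G = nx.MultiDiGraph()
feedback = [
    ("customer_1", "pharmacy_A", "delayed refill"),
    ("customer_2", "pharmacy_A", "billing error"),
    ("customer_3", "doctor_X",   "long wait time"),
    ("customer_1", "mobile_app", "login failure"),
]
for customer, entity, issue in feedback:
    G.add_edge(customer, entity, issue=issue)

# Structured retrieval: all complaints attached to one entity. This grouped,
# entity-centric text is what you would hand to an LLM to summarize.
complaints = [d["issue"] for _, v, d in G.edges(data=True) if v == "pharmacy_A"]
print(complaints)  # ['delayed refill', 'billing error']
```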

And the second thing that amazes me these days is how the mathematics we learned in high school, and during our bachelor’s in engineering, keeps showing up. Now I can connect the dots: the differential equations, integrals, and linear algebra I studied when I was young, I can see myself using them in real algorithms in practical ways.

[00:33:36] Ghazenfer: Thank you. One last question before we wrap up. If people want to learn more about your work or explore responsible AI 

[00:33:45] Dr. Swati : mm-hmm. 

[00:33:45] Ghazenfer: Where should they start? 

[00:33:48] Dr. Swati : Responsible AI matters not just in technology but in any industry you serve. They can look at my LLM evaluation toolkit, which is an open-source project I created, and they are welcome to contribute to it, because it’s open source and I would love to have diverse thoughts on how we can make it more comprehensive. And when you think of responsible AI, don’t think only of bias and fairness; there’s more to it.

Responsible AI is about making a system more robust, not just in terms of bias and fairness but in terms of security too. As Mr. Mansoor mentioned, it could be prompt injection or other kinds of issues, and then there are various production issues as well.

Always think about responsible AI in a holistic, comprehensive way, whether you work in software development as a software developer or on AI/ML systems as a data scientist or AI/ML engineer. You can connect with me on LinkedIn, and I’m happy to answer other queries about it as well.

[00:35:00] Ghazenfer: Awesome. Thank you, Dr. Swati. This has been a great conversation; thank you for sharing your insights and experience with us. We really appreciate you joining us on Lessons from the Leap. Before we wrap up, is there any final word, and where can our audience connect with you? Is LinkedIn the only option, or do you have a website or anything else you want to share? We’ll put it in our podcast information as well.

[00:35:25] Dr. Swati : I will share my email ID and my LinkedIn, and you can reach out to me there. The last words I would say are: stay hungry, stay curious, and stay foolish. Those are the three things I always say. Be a lifelong learner. Yeah.

[00:35:42] Ghazenfer: Cool. Thank you very much. Have a good rest of the day. 

[00:35:45] Dr. Swati : Thank you