Bet on People

Bet on People with Triparna Chakraborty

Season 1 Episode 3



On this episode of Bet on People we sit down with Triparna Chakraborty. We recorded this episode before we locked in our format so this conversation takes a bit of a different path. 

Triparna Chakraborty is a People Business Partner at Credo with 12 years of experience across the full spectrum of HR — workforce planning, hiring, and employer branding — and an engineering background she never left behind. 

Before moving into HR, she worked across multiple countries, and that global, cross-functional lens now shapes how she approaches AI: she works with both individuals navigating career change and organizations building AI strategy, helping bridge what the technology can do with what people actually need to trust and use it. She's one of the more grounded voices on what it actually means to center people in how new technology gets adopted.

In this episode we'll talk about:

  1. The micro ways that individuals can use AI to help prepare for interviews
  2. The macro ways leaders and organizations need to be thinking about AI adoption 
  3. The important nuances leaders need to understand to navigate conversations around AI uncertainty
  4. & so much more!

Connect with Triparna and Euda:

Follow Triparna Chakraborty at https://www.linkedin.com/in/triparnachakraborty/

Follow Euda at https://www.linkedin.com/company/euda-io/

Learn more about Euda at euda.io

Subscribe to Keegan Evans on Substack at https://eudaceo.substack.com/ 

SPEAKER_01

Hey everyone, welcome to Bet on People. This podcast is about the stories behind leaders' toughest decisions and why betting on people is good business, not just good feels. This episode's gonna sound a little different. We recorded it before we'd really locked in our format. So instead of our usual structure, where I walk through three decision stories with a guest, you're gonna hear a discussion moderated by Euda's chief of staff, Molly Van Duzerzic. Even though this episode doesn't follow our usual format, it's absolutely what this podcast is about: the human side of these big decisions, the vulnerability, and how, when you center people, better business outcomes tend to follow. So here's our discussion about the human cost of bad AI implementation. All right, we're here with Triparna Chakraborty, HR business partner and former engineer, which fits right into Euda's preference for non-standard career paths and broad experience. Triparna, welcome to the conversation. I'd love to have you tell us a little bit about yourself.

SPEAKER_00

Thank you, Keegan, for inviting me and Molly. I'm Triparna, I'm an HR business partner, and I have 12 years of experience doing workforce planning, hiring, employer branding, the whole spectrum of HR. Um but prior to being in HR, I was also an engineer. And I have worked in a few different countries, and now I'm based out of California in the Bay Area.

SPEAKER_01

Love it. And we met at an HR association event a couple of years ago, and it's just been great to continue to stay connected and see your success and progression. And you joined us at a couple of our roundtables last year. But today we want to talk about one of our overlapping interests. You are very active on LinkedIn, and everyone should follow you there for your content around AI and AI adoption. So we're gonna talk today about the human cost of bad AI implementation. I'm gonna let Molly start this conversation off, though.

SPEAKER_04

Triparna, I want to start off by asking you a question tied directly to your very unique intersection of engineering and HR. I'm curious, when you think about bad AI implementation, specifically from that intersection, what does that look like?

SPEAKER_00

Yeah, you know, I think about two different ways it plays out: bad AI implementation for an employee, and bad AI implementation for you personally when you're working through technological changes. For an employee, I would assume it's a lot about trying to figure things out and being really confused. The word is confusion. You don't have clarity on how your role could get impacted because of AI, how it would change. So that's one segment of it, right? But from an individual lens, I think it's twofold. There are two personas, and these are two extreme personas. One is the paralyzed professional. Here is a person who is constantly rethinking whether to use AI or not. They don't know where to start, and they're figuring out if it even makes sense for them. So they're looking for permission, guidance of some sort. On the other side of the spectrum is the hyper user, who is also the silent user. They're using AI well, but they're not talking about it. They don't know how to bring that into their work and how to really communicate about it. Underlying all of that, I think, is the anxiety, the replacement anxiety: will AI replace me? But most of us fall somewhere in the middle of that spectrum. We are neither paralyzed, we're working with AI, but not always super smartly, and we're not the hyper user yet. And bad AI implementation is when you don't know where AI could add value to you, and you're always just thinking about automating. But the real question is: how can you learn using AI?

SPEAKER_04

I love that. I really, really love that. I was a little bit of an AI hater in the beginning, and I have come full circle, I will admit. But I think it's because of that point, because of understanding that there is a value, and the value can be to grow and to learn. So, Triparna, we talked a little bit about the individual level of bad AI implementation, how that affects us as an employee, maybe how that affects us in our personal journey in our career. Keegan, I want to ask you, and I want you to share with others, from an organizational culture standpoint. When we're thinking about the whole, what is bad AI adoption? What does that look like?

SPEAKER_01

Yeah. I really like how Triparna identified the two sides of the experience: the employee as an employee, but also the employee as an individual. When I think about organizational adoption and the culture standpoint, bad AI implementation looks like missing the details and the lived experience of the members of the team. On the far end of this, we see a continued breakdown in trust, where there's a belief that we are training our replacements, and that can break down trust amongst each other and in the organization. Alternatively, there's this feeling of just checking the box. We see this a lot in organizations where the leaders say, "Just use AI. Just go use it. Let's put some AI on that. I bet we could solve this with AI," without really understanding and breaking it down. Without intentional AI adoption, without the intentional cultural adoption, we see a continued lagging behind. There are failed pilots, there are unmet expectations, there's shadow use, where people are using it on their phones to speed things up, but without the awareness and permission structures of the organization. And that leads to the organization just lagging behind. And as we lag behind, we see others, we feel more pressure, there's constantly changing AI technology, and that leads to more exhaustion and burnout as the purpose doesn't get filled in. At Euda, I talk a lot about AI amplifying our humanity, and that acknowledges that it's going to amplify all the good and the bad parts. So bad AI implementation at an organizational, cultural level looks like amplifying all the wrong things about our humanity and the organization.

SPEAKER_00

I really like um, Keegan, how you mentioned the difference between AI implementation and intentional AI implementation with the right guardrails.

SPEAKER_01

Yeah, yeah. Guardrails is something we talk a lot about.

SPEAKER_04

Before we go there, because I think guardrails are important, and I do want to talk about that, and I imagine, Triparna, you have a way to think about that, and Keegan, I know you do too, I wanted to tap into something that we've talked about a little bit at Euda. And Triparna, I think you do a great job talking about this on LinkedIn. I want to ask you both: what does it mean to normalize AI use? What does it mean to talk about what we're afraid of, what we're unsure of? How do you do that, Triparna, at an individual level amongst your peers or at your job? And Keegan, I want to ask you at an organizational level: how do you lift up the voices of people who are saying, "Actually, I am a little concerned. I don't know how to do that"? So, Triparna, you talk a lot on LinkedIn. What motivates you to do that? Why do you think that's important for your peers? And yeah, just tell us a little bit more about normalizing AI use.

SPEAKER_00

One of the drivers behind why I started to do this on LinkedIn is that at the events I attended, the theme I heard from many people was, "Yeah, I use AI, but I'm not totally sure how it can add value to me apart from helping me craft an email or automate some tasks." And that got me thinking, right? Then came the next conversation, where people started talking a lot about tools: Midjourney, Nano Banana Pro, and all that. So people were experimenting with tools a lot at some point. I found that there was a gap between "how else can I use AI" and "oh, these are all the shiny tools I have to play with." And that gap for me is: how are you using those tools, how are you using all the use cases that are there for AI, to actually progress your own development? That got me to actually use Claude Code and all of that, to really play with AI. And it also got me to create content around how we could use AI smartly, as I call it. One of the big use cases there is to use AI as your personal accelerator. I love to do this. I take out the sensitive, confidential information, you know, names, and I create a document mentioning my career goals and what I should be thinking about. After I've taken out the sensitive information, I plug the emails into any of the LLMs I use, and then ask the AI to check my CV, check some of my communication patterns, and point out my biggest trends and my areas of development. One of the areas of development that I received personally, and that I've been working on, is how to enhance my negotiation skills, because I'm in HR. I need to be able to negotiate well, but also, personally, that's a great skill to have. So every person can think of that for themselves, of what they could be working on.
And AI could actually become your coach if you give it enough prompts and data to work off of. That's one way. The second element of that, I think, is skill stacking. It's not just about developing soft skills or power skills; you can also use AI to develop your technical skills. I'm thinking about years back, when I moved from engineering to HR, we didn't have AI then. I sound old, but AI is a recent phenomenon. We didn't have AI then, so I would pore over coursework, I would meet with people who were deep in the HR world, to truly learn. Today, I plug information into AI, you know, courses, links, articles, and ask: if AI were to create a course for me for a specific skill, what would that look like? So there are ten different ways to use AI, right? And that's why I started making this content on LinkedIn: to let people know that, hey, there is this other way too.

SPEAKER_04

Yeah. So it sounds like, for you, putting your voice out there, putting out these examples, these use cases, is meant to educate people so they can succeed, right? 100%. I love that. Keegan, from an organizational perspective, and we've been inside the room for some of these conversations: why is it important, and what does it look like, to create a space for employees to talk about AI?

SPEAKER_01

Yeah. I'm actually going to pull the old coaching trick on this one. Triparna, you are very assertive and clear about how you can positively impact other people. But I'd love to hear: what was your experience inside the organizations you were working in as AI, especially generative AI, was becoming more publicly and prominently available? What was the environment like, and what permission structure did you feel you had to try AI out and learn more? Where was that working, and where, especially, was that not working?

SPEAKER_00

Yeah, you know, in the last organization I was in, there was this organizational effort to actually help people know about the different suites of, I mean, the different tools that we have, right?

SPEAKER_02

Yeah.

SPEAKER_00

So there was standard coaching. The company also conducted trainings at an organization level, and there were signup sheets so we could sign up. The good part about those trainings was that everybody was able to join live, ask the questions that were bothering them, and also have a good debate and discussion. I really appreciated that myself. I think at some point it boils down to your work. Now, I am in a function, HR, which I think should truly be driving a lot of AI adoption and be at the forefront of it. What was interesting was HR as a function, and this is not pertaining to my last organization, but when I attended those events and started speaking with different HR folks, I realized that HR as a function was still catching up while the business was ahead. The business was thinking about how AI could impact, you know, marketing, digital marketing, omnichannel. But AI for HR was still a little behind. HR was a little bit lagging in that sense.

SPEAKER_02

Yeah.

SPEAKER_00

However, I do acknowledge that different industries and different companies do it differently. So one of the things we had started to do in my previous company, apart from trainings, was also to have open conversations within the team. And now in my current company, a semiconductor company, a very technologically advanced, focused company, there are much richer conversations. There is also that element of, hey, we need to bring people along on the journey. So for me, the positive is when an organization thinks of it organizationally, how it impacts people, and then really makes it specific for the team. What I would like to see overall is that you break it down even further, down to the level of the manager. The manager should be able to speak comfortably to the people reporting to them about how their role is changing and how they could infuse AI little by little, that one-person change that could positively impact their job, but also the overall goals of the company.

SPEAKER_02

Yeah.

SPEAKER_00

And I think that is where many companies are a little stuck. I don't think it has boiled down yet.

SPEAKER_01

Yeah. At a couple of points in there, you shared examples where there was explicit discussion about AI, some discussions about concerns and fears and space for that kind of thing at organizational levels, or, as you were just saying, empowered managers being able to have those kinds of conversations. What worked to create the space for those discussions to feel helpful? Or, without naming any names, what didn't work? What shut down those kinds of conversations?

SPEAKER_00

I would say, being very early on in the AI adoption journey, that was the feedback we were collecting at that point in time, if that makes sense. That was the time when step one was truly organizational trainings on the different suites and having more open conversations. Step two was to truly understand, and here is where consultants played a role too, people bringing in different expertise, sharing knowledge. That is where I think we were getting to. And then, of course, I moved on to another organization.

SPEAKER_01

Yeah. Wonderful. And I ask that, I go in on that, because it goes back to Molly's original question in this section: what supports the kind of conversations that advance AI adoption at an organizational level? Something we see time and again in working with our clients is that there is a varied level, and each organization is different, with different folks across different places on this mindset journey. There's a varied level of acceptance, or reluctance, or fear, or antagonism, there it is, toward AI adoption. In most situations, when we're able to create the space where there's an earnest opportunity for "hey, this particular thing worries me," and some shared conversation about that without judgment, we might not get someone completely over to "yay, AI," certainly not in one session. But so much of this change, so much of this technological adoption, is about the pace of change and how our human bodies can't adapt to change as quickly as this is changing. And so one of the biggest factors we see for success is organizational leadership modeling the permission to be along on that journey, and normalizing the kinds of concerns and fears, and addressing them non-judgmentally, but in a "we're making progress toward how do we solve this, how do we address this" way. And so a coaching mindset, you mentioned that earlier, is really important for leaders, and it can be especially hard when leaders themselves are trying to figure this out and adopting. But that space is the first step that then allows real discussion to develop the guardrails, and those develop the permission structure, that can then be applied to specific workflows.

SPEAKER_00

If I may ask, to add to this, from your experience at a higher level: yes, the organization maybe is trying to be really intentional, right? Many organizations are working on their intentionality, on becoming better.

SPEAKER_03

Yeah.

SPEAKER_00

But how can those big organizations trickle that intention down to that one employee, from what you see?

SPEAKER_01

Yeah. It's a great question, and it's the sticking point for so much of this kind of adoption, and frankly, organizational culture work in general. So much of it comes down to modeling, but also matching words and actions. I talked earlier about "just use AI" as a fairly ineffective way to get AI adoption across. What helps is creating the structures and frameworks to find these specific opportunities to use AI, to practice experimenting with it and playing with it. So much of the potential for AI in terms of individual amplification is gonna come from experimentation and play, and from individuals. The power to impact organizations is gonna come from unlocking individual capabilities and how it fits in individual ways. But you can't do that unless there is psychological safety, unless there is clarity and a permission structure to try to do that. One of the ways that leaders can really help is to be intentional. What we do is help develop specific guardrails around what the professional identity is. Are we lawyers, HR professionals, mediators? Whatever that identity is, as these people, these are the behaviors that are important to us. And that's not new to AI; it's just about being intentional about that thought and then communicating what that means for our AI use. Another way we develop guardrails is around identifying risk, specific risks to an organization, weighing the severity and probability of those risks, and then specific mitigations for how to address them. Those guardrails then create that permission structure. Employees, teams, leaders know what is in bounds and what is out of bounds.
And that confidence of knowing what's in and out of bounds gets to the root of the biggest challenge we see in adoption, which is that people don't want to try it in public, because they don't want to be doing it wrong. They don't want to be thought of as cheating, and they don't want to be thought of as less capable in their jobs for using it. Which takes me to the second and most tactical thing leaders can do, which is to use AI in public. Not necessarily on LinkedIn in public, but in front of their employees, talking about not just where they used it and it worked, but where they used it and it didn't work. Early on in our time, when Molly was still in her AI-reluctant phase (that's very generous), I would share my screen on Zoom with whatever Claude or ChatGPT session I was using. And it was painful, especially at first, for two humans to be watching a chatbot respond. And it was especially painful because I'm the CEO. I don't want to be exposed and vulnerable and doing something wrong, and risk undermining confidence. But the reality is, by choosing that vulnerability, I was instilling confidence that it's okay to continue to experiment. Because I didn't get things wrong all the time; it didn't fail all the time. And where it failed, we learned from it, and where it really worked, we learned from it. So: setting up the conditions, bringing the clear guardrails for permission structures, and then actively walking the walk so that your team sees it. Those are the two ways leaders can do that.

SPEAKER_04

We've talked in different ways about how to hang on to some sense of control during what is a really scary time for a lot of people. People are worried their jobs are going to be taken; people are worried they're not doing AI right. We've touched on a couple of opportunities, but what are some additional ways? And Triparna, I would love to hear from you on the individual level, and Keegan, organizationally: how do we gain control? When we sit down at our computer and start the day, how can we quiet the noise and say, okay, here are the steps, here are the micro things I can do to feel like I'm in control during this really intense time of change? Triparna, I'm curious what you have to say about that.

SPEAKER_00

Yeah. The one thing I was thinking through as Keegan was speaking, too, is about fears and anxiety in people. And the reality is such that AI is going to be changing jobs. AI is already changing a lot of jobs, right? Software engineering has changed drastically. Customer service is changing right now as agentic AI has taken over. And marketing is one of those jobs as well. I think all jobs are getting impacted in some form or manner. There are of course jobs, like the skilled trades, plumbing, etc., which AI doesn't have the intelligence yet to take over, because that's just something where a human needs to be there. But other than that, jobs are changing. So the first and foremost thing to actually gain some semblance of control is to know how your role could potentially be changing. And the one thing I tell people is: don't think about your role only in your organization, because that's one entity. Think about your role from a macro standpoint. So if you are in healthcare, AI is transforming healthcare by accelerating drug discovery, shortening the time by 40 to 50 percent. If you're in healthcare, that's an industry shift. But if you are a software engineer in healthcare, then you will have to think about how your role in that industry is changing. Claude Code and other tools have made it tremendously easier for people to code now. So then, as human beings, our role primarily lies in judgment and auditing: auditing the work AI does, having guardrails around how teams are going to function, with AI being, essentially, one of the team members, right? And once you know that these are the areas you will play in, given that your role is changing, you start developing skills for that. That's one use case: understanding the role, how it's changing, and developing skills to get ahead of it.
The second thing I would say, and this is specifically for those who are job seeking now: the job market is tough worldwide. The common use case people tend to use AI for is to update their resume, maybe even their cover letters. But if you use the right prompts, AI can tailor your interview prep for you. So use AI like that career coach, that career strategist, that interview prep partner. If the company you're interviewing with is a public company, then you have a lot of data out there, like their annual report. Plug that in and ask AI to identify the areas this company is investing in, what their products are, where they are moving, and use that as a foundation for how you speak in the interviews, so that you come across as the professional you are. So try to use AI specialized to the use case, the role, the market that you are in. And if you are not job seeking, if you're in a role already, then it's about understanding what else you could do, what other technical skill set you could develop, to make it better for you, to do better stakeholder management. Maybe communication is an area a person is working on. I'll give you a fantastic example. A friend of mine is in marketing. She loves marketing; that has been her passion. But she is not a data person, and her work has now become very data-driven, because she needs to understand how her campaigns are working. So the way she's using AI is to help her tell a story about her campaigns to her stakeholders. You know, if a campaign is doing well, then which audience is it hitting? What does it mean for the product? How could they maybe change the campaign tonality a little bit, right?
So she has now become the de facto storyteller along with being the marketing manager. AI can actually help you enhance capabilities you didn't know you had before.

SPEAKER_04

As I'm hearing you talk, Triparna, and you're laying out a variety of different use cases, I keep thinking to myself: these are questions I might ask a friend of mine who is an expert in that field, right? And it feels a lot easier and a lot more gentle to say, well, how would I ask a trusted expert, or a trusted friend, or a peer, someone I work with, someone who works in that area: what questions would I ask them to help me prep for this? What questions would I ask someone about how our market is changing, how this job is changing, what's happening in our industry? So it's almost about taking the question marks off of scary AI and just remembering that you really are just asking a question. And that's all it has to be.

SPEAKER_00

The one point I will add, though, and it's exactly what you said: since you're talking to a friend, you also challenge a friend if you feel something they said doesn't suit your scenario. And AI doesn't have all the context that you have. So it's going to give you answers which may feel a little off; maybe the tone is a little off. So you challenge the AI. You correct the AI, you give it better prompts and data so it can actually give you the answer that makes sense for you. So: judgment.

SPEAKER_04

I love that. And I think that is so important: also remembering the value of you as a human, right? AI is scary and all these things, but we also have value, and grounding ourselves in that is so important. One of our core principles here at Euda is amplification without abdication, meaning that it is always our responsibility as the human to make sure the information is correct and that things are aligned with your personal or professional brand. I love that. Keegan, do you want to touch on anything Triparna said, or answer that question of where do we get control? How do we ground ourselves during this crazy time?

SPEAKER_01

Yeah. One of the important things that came up in just this last part was recognizing that AI functions best not as a single transaction of information, but as a conversation. Starting with that can help unlock a lot of the blockers that I often see. One of which is the idea that if I don't give it the right prompt or the right thing first, I'm not gonna get the right piece back. When it's a conversation: one, I encourage anyone who can to just release that expectation, because frankly, they're getting better all the time. I just talk to it. I don't overthink what I'm putting in, beyond: okay, here are a couple of pieces of context that I need to include, and I want to be sure I include those, and then I see what comes back. But the other thing is just like that: if you put something in and it's not really fitting the need, fitting what you're expecting, fitting what you're hoping, answering the question, or it spawns more questions, keep giving it more information. I will regularly, politely say, "Oh, shoot, I forgot to include this piece. Can you consider this and share the thought again?" So create that dialogue, and lower the bar for yourself about needing to get the prompt just right. Two years ago we had this whole concept that the career of the future was going to be prompt engineers who know how to specifically talk to it. And as AI has evolved, it has just gotten better at understanding how we talk, which is a useful thing. The other thing that came to mind, from the leader's perspective, is that leaders have a responsibility to be a capacitor in times of change instead of an amplifier of anxiety.
With all of this change, it is impossible to stay on top of everything that AI can do, of every bit of AI news. Leaders creating the permission to focus on what is important, and not have to stay on top of everything, is a big step. And then, everyone: just focus on what you want to stay on top of. Everyone should follow Triparna. You don't have to follow Triparna and every other AI influencer out there, but you should follow Triparna first. There are going to be sources of information. One of the things we've known since the rise of the internet, and especially social media, is that we are flooded with information beyond the capacity of our brains, and this was before AI. AI is creating more types of information that have to be sorted and organized. How can you use it, instead of feeling the pressure to stay on top of everything? That is crucial to staying in control, to being able to control and focus. And then finally, what we work on in terms of experiencing control is identifying where it can actually be useful: not just because AI is technically capable and powerful at something, but what do you actually want to be easier about your life? And this is different for Triparna and Molly and me and every single person listening. But that intersection, we call it the fertile ground for AI adoption: well, this is the part of my job that I just kind of avoid. Are there ways I can do it faster or easier with AI? And we don't even have to have that answer ourselves. I just go ask AI: hey, this is me, this is some of the stuff I do, this kind of sucks in my day-to-day, any ways you think you can help with this? And then have the conversation: oh, well, that's an interesting idea, let's play around with that, and start to build those pieces out.
So those are three important factors, from both a leadership structure and an individual mindset, for feeling that sense of control and agency.

SPEAKER_04

A follow-up question. I wanted to dig into something you said, Keegan, and Triparna, I do want to hear your opinion on this too. We have attended some summits where folks have talked, I mean, everybody's talking about AI. That's all we're talking about. And I want to tap into leadership and leadership responsibility when it comes to AI. Keegan, you mentioned the landscape is constantly changing, the technology is changing, the news is changing. Triparna, we've talked about how careers are changing, how it's hard to get a job. When we're thinking about leadership, and I think I know the answer to this: as a leader, when you don't know the answer, when you don't know how the job is gonna change, when you don't know what's gonna happen in six months, when the technology is rapidly changing, do you say that to your team? Do you try to fake it till you make it? As a leader, how do you convey to your team that yes, it's uncertain, but we'll figure it out? What does that look like? What does that sound like? I'm happy to hear from either of you.

SPEAKER_01

Yes, you do, but how you say it matters greatly. And this is what I mean when I talk about being a capacitor rather than an amplifier of anxiety.

SPEAKER_04

I didn't know what that word meant, capacitor.

SPEAKER_01

Oh, sure. In an electrical system, when there's a lot of high energy, the capacitor absorbs that energy and distributes it downstream in a predictable amount. I think. Either that or I completely blew my metaphor in public. But the idea is that as a leader, you want to do the same. Put another way, in the classic 1998 film Saving Private Ryan, Tom Hanks explains: gripes go up, they don't go down. Same with complaining and concerns. Now, vulnerable leadership is important, because that connects to authenticity. So yes, we say when we don't necessarily know, but we don't put the emotional burden of us not knowing on our teams and employees. When I was at Intuit, working on the future of work and on hybrid, flexible work coming out of the pandemic, one of the leaders there used this wonderful, pithy quote: our job is to provide clarity where we cannot provide certainty.

SPEAKER_04

Clarity where we cannot provide certainty.

SPEAKER_01

Yeah.

SPEAKER_04

Okay.

SPEAKER_01

There's a very human urge to find certainty in things. This is why we bucket the world into simplifications and narratives; it's how we understand the world. But there's a lot of uncertainty. There's always gonna be uncertainty. It's impossible to get full certainty. Great leaders find ways to provide clarity about what is true, what is known, what is unknown, and what we're gonna do about it, so that their teams are empowered to do the work they're doing on the path for the organization.

SPEAKER_00

I just want to say it's so beautiful, that line, that we provide clarity when we cannot provide certainty. As an HR business partner, one of the things I have constantly seen arise is the need for people to know exactly what's happening. And I have that too. We all have that. We want to know what's going to happen tomorrow. But the reality is, when the CEOs of all the top AI companies go on stage and talk about their different tools, even they don't know how those tools are going to play out in the future. Maybe there are going to be three other versions with completely different use cases coming up. So one thing that I intentionally try to do in my conversations with leaders is to let them know that there is a difference between having clarity and communicating. You don't always need to have clarity. There is a good chance you will not; even the CEO of a company may not, in many cases. But you can still communicate with your team in a way that lets them know you are doing everything you can to collect the information and think about the bigger picture. That's where communication as a skill comes in. Do you have all the information? No. Can you share all the information you do have? Maybe not, because you are processing and working through that information too. Maybe it's somewhere in between: whatever the person, the employee, can action today. That's where the sweet spot lies.

SPEAKER_04

I would love to keep talking, because I think we could, but I do want to be mindful of the time we have left. We've talked about a lot of different things: what bad AI implementation looks like, how it affects people from the individual level to the organizational level. Let's end on a positive note, if we could. In a world where everyone is deploying AI in a way that is useful, effective, and ethical, what does that unlock? What does good AI adoption unlock? What does that look like for us in our careers, in our personal lives, and maybe even for humanity?

SPEAKER_00

Keegan, do you want to take a stab at it?

SPEAKER_01

Oh, sure. I'll go first. In that world, what it unlocks is the core philosophy behind why I named the company Euda: it's based off of eudaimonia, the good life, or human thriving, which is found by contributing the most to society. And the way we contribute the most to society is finding the intersections of our internal talents and skills and strengths and passions and loves, just stuff we like doing, and then mastering those intersections toward excellence. Throughout most of human history there has been a path to that, but there's also been a lot of other stuff we have to do in the day. These days that includes reading and understanding email, and a lot of it. A future where AI breaks our way and amplifies all these things frees up so much more space for every single individual to find more and more of those intersections within themselves, and therefore continue to rise, as individuals and as a society, and make greatness we can't even imagine right now.

SPEAKER_04

What about you, Triparna? What does the future of positive AI hold for us?

SPEAKER_00

The phrase that comes to my mind is problem solving. The goal of any tech is going to be problem solving. What we could have done in years, AI is potentially doing in days. But I think where we are right now is figuring out how best we can use AI to problem solve. That's the gap that you guys are working on. That's the gap that I'm thinking about. That's the gap that all of us are questioning and thinking about, right? And debating. So I would say, personally, AI can unlock a lot for you, but you need to be able to start at some level. One percent increase every day, one percent use every day. And then at some point you will feel comfortable with AI, where AI is able to add a lot of value for you beyond automation. And that's a great unlock to start with already.

SPEAKER_03

Yep.

SPEAKER_02

Love it.

SPEAKER_04

We love it. Keegan, do you want to do our lightning round questions?

SPEAKER_01

Yep. All right. So, five questions, super quick. You should be able to give one- or two-sentence answers to each of them. Okay. You ready?

SPEAKER_00

Are they like general knowledge questions, or... yes, yeah, yeah.

SPEAKER_01

No, this is not a quiz. What was your first job?

SPEAKER_00

L&T Technology Services in India.

SPEAKER_01

Love it. What's one leadership skill that you wish you'd learned earlier?

SPEAKER_00

Hmm, this is an interesting one. I would say difficult conversations.

SPEAKER_01

Good. What is the biggest myth about leadership?

SPEAKER_00

That you need to be jazzy and extroverted and bold.

SPEAKER_02

Yes.

SPEAKER_00

You don't need to be that to be a leader.

SPEAKER_01

Excellent. What is the worst professional advice you've ever received? You do not have to name names.

SPEAKER_00

The worst one, I would say, is to just wait it out. You know, like: don't try to change things too quickly, just wait it out, maybe a few months in. And I immediately think of impact. The impact of waiting: how does that play out for a company that's trying to grow? So, yeah, waiting it out. It depends, maybe it works in some situations, but I believe in intentional action.

SPEAKER_01

Yeah. All right, and wrapping it up: what's a podcast or book you'd recommend that is not leadership or business related?

SPEAKER_00

Okay, this is a tough one, Keegan. A really, really tough one. I think a great podcast would be any of Tim Ferriss's podcasts.

SPEAKER_02

Sure.

SPEAKER_00

There are episodes on how to find meaning in life, but also very practical ones, you know, on steps you can take to be physically healthy, mentally strong, resilient. That's something I have been working on too. A book that I want to read, that I've kind of shelved partway through and want to complete, is Sapiens by Yuval Noah Harari.

SPEAKER_01

Oh, yeah, yeah. Triparna, thank you so much for joining us, and for such incredible insights and perspective. We will have her LinkedIn in the show notes, so follow Triparna there and reach out. Anywhere else that you want people to find you?

SPEAKER_00

I just want to say, you know, thank you for inviting me. Euda is doing some fantastic work out there, and the conversations I've had with Keegan have been very enriching. And you are a great coach, Keegan. Thank you. I truly enjoyed this conversation.

SPEAKER_01

Likewise. Yes, thank you. And as always, you can find us wherever you find podcasts: LinkedIn, YouTube, Spotify, we'll be all over the place. And we'll see you next time, on the next episode.