Masters of Automation


How to win with AI w/ Prof. Tom Davenport

Thomas H. Davenport is the President’s Distinguished Professor of Information Technology and Management at Babson College, a visiting scholar at the MIT Initiative on the Digital Economy, and a senior adviser to Deloitte’s AI practice. He is a co-author of All-in on AI: How Smart Companies Win Big with Artificial Intelligence (Harvard Business Review Press, 2023).

He has written or edited twenty books and over 250 print or digital articles for Harvard Business Review (HBR), Sloan Management Review, the Financial Times, and many other publications. He earned his Ph.D. from Harvard University and has taught at the Harvard Business School, the University of Chicago, the Tuck School of Business, Boston University, and the University of Texas at Austin.

Listen to the full episode:


010 - How to win with AI w/ Prof. Tom Davenport

In today's episode, I had the pleasure of speaking with Professor Thomas (Tom) Davenport, an expert in artificial intelligence (AI), analytics, knowledge management, and automation.


One of HBR’s most frequently published authors, Tom has been at the forefront of the Process Innovation, Knowledge Management, and Analytics and Big Data movements. He pioneered the concept of “competing on analytics” with his 2006 Harvard Business Review article and his 2007 book by the same name. Since then, he has continued to provide cutting-edge insights on how companies can use analytics and big data to their advantage, and more recently on artificial intelligence. Tom’s book co-authored with Julia Kirby, Only Humans Need Apply: Winners and Losers in the Age of Smart Machines, offers tangible tools for individuals who need to work with cognitive technologies, and in The AI Advantage: How to Put the Artificial Intelligence Revolution to Work, he provides a guide to using artificial intelligence technologies in business.

We covered a wide range of topics, including the adoption of automation and analytics, the rise of generative technology, the cultural challenges enterprises face in adopting change, and the emerging fields of citizen development and citizen data science. Professor Davenport shared his insights on the future of AI, the potential impact of deepfakes and disinformation, and the importance of regulation. He also discussed his own experiences with generative tools such as DALL-E, and the evolving role of human editors in a world where AI-generated content is becoming more prevalent. It was a fascinating discussion that provided valuable insights into the cutting edge of AI and its impact on society. We highly recommend tuning in to listen to this insightful conversation with one of the foremost experts in the field.

Some questions we discussed:

What inspired you to pursue a career in analytics and data science, and how did you get started?

You've written several books on the topic of analytics, including "Competing on Analytics" and "The AI Advantage." What led you to focus on these areas, and what insights have you gained from your research and writing?

How do you see the field of analytics evolving in the coming years, and what opportunities and challenges do you anticipate? In your opinion, what are some of the most important skills that aspiring data scientists and analytics professionals should cultivate?

Many organizations are struggling to leverage their data assets effectively. What advice would you give to leaders who are trying to build a data and AI-driven culture within their organizations?

You've also written about the ethics of AI and the potential risks associated with these technologies. What are some of the most pressing ethical concerns related to AI today, and how do you suggest we address them?

Finally, what advice would you give to students and young professionals who are just starting their careers in analytics and data science, and what qualities do you think are most important for success in this field?

Alp Uguray, Host & Creator: Welcome to the Masters of Automation podcast series. In today's episode, we have Professor Tom Davenport with us. Welcome, professor, it's a pleasure to have you join us.

Prof. Thomas (Tom) Davenport: Happy to be here, thanks for having me. Thank you.

Alp Uguray, Host & Creator: Tom Davenport is the President's Distinguished Professor of Information Technology and Management at Babson, the co-founder of the International Institute for Analytics, a fellow of the MIT Initiative on the Digital Economy, and a senior advisor to Deloitte Analytics. He has written or edited 20 books and over 250 print or digital articles for Harvard Business Review, Sloan Management Review, the Financial Times, and many other publications. He earned his PhD from Harvard and has taught at HBS, the University of Chicago, the Tuck School of Business, BU, and the University of Texas at Austin. One of HBR's most frequently published authors, Tom has been at the forefront of the process innovation, knowledge management, and analytics and big data movements. He pioneered the concept of competing on analytics with his 2006 Harvard Business Review article and his 2007 book by the same name. Since then, he has continued to provide cutting-edge insights on how companies can use analytics, big data, and AI to their advantage. And Tom's book co-authored with Julia Kirby, Only Humans Need Apply: Winners and Losers in the Age of Smart Machines, offers tangible tools for individuals who need to work with cognitive technologies. So we have a very special guest with us today, and we can dive into different areas and topics, starting from AI, big data, analytics, knowledge management, and such. But I'd like to kick things off with the first question: what inspired you to pursue a career in analytics and data science, first within academia, and then transitioning a bit into collaborating more with the private sector as well?

Prof. Thomas (Tom) Davenport: Sure, well, from late undergraduate and throughout graduate school, I was sort of a consultant to people doing statistical computing for their academic work. I paid my way through graduate school that way, and I liked it a lot. I was a sociologist by academic background, but I got more and more interested in computing. Then, when I became a consultant and got into business schools, I left analytics for quite a while, and I was doing stuff like business process re-engineering and knowledge management and so on. I remember I was doing some knowledge management work, and the steam was going out of that area a little bit. And I thought, you know, there's this whole area of knowledge that's derived from data that we don't really focus on enough. This was in the business intelligence era. So I fortunately got some money from SAS and Intel to do some research on their customers about what they were doing with business intelligence. And it turns out they were doing more than that; that whole competing-on-analytics idea came out of there. And then that evolved into big data, and that evolved into AI. So that's kept me going for, I don't know, twenty-something years now in one aspect of it or another. And I think I have eight or nine books. I don't know exactly how many, but more than anybody would want to read, certainly.

Alp Uguray, Host & Creator: There are definitely plenty of books, as well as articles. So throughout your twenty years, you've seen knowledge management and data starting, and then leveraging analytics to drive predictions on that data, on user behavior, and whatnot. Then it moved on to artificial intelligence, to better understand the patterns and build an application layer on top of that. Throughout your research, what were some of the insights you had? First, which enterprises weren't able to adopt quickly, or were maybe a little resistant to change? And which were quick to adopt, and how different was their mindset?

Prof. Thomas (Tom) Davenport: That's a good question. You know, I think you could argue I started really around 2000 with an article called "Data to Knowledge to Results," about building an analytical capability, and nobody paid any attention to that at all. I don't know if it was not as widely read a journal, maybe, or the timing wasn't right. But when I started writing about how companies compete on analytics, I think they were quite interested in doing more with analytics. It's always been a challenge, though, to get companies to compete on analytics or to compete on AI. My most recent work is basically about competing on AI, although we didn't really call it that; the book's called All In on AI. And I think, you know, companies aren't really ready to go all in on anything that they don't fully understand. So it's been a struggle to get the competing and strategy-oriented approaches adopted, but certainly there's been a huge increase in the number of companies that are doing more with analytics and have chief analytics officers and AI experiments anyway, if not large-scale commitments.

Alp Uguray, Host & Creator: So I think it's a matter of how much scale they're willing to adopt, how much they're willing to commit to this; that's the organizational change challenge. Do you think that, as they take a lot of time to adopt, with their change management and restructuring, new startups are forming to address that niche?

Prof. Thomas (Tom) Davenport: Yeah, yeah. You know, I think that's the big issue with these legacy companies, which I've always been quite interested in because of the challenge of transforming them into organizations that can compete with startups; if they fail, they go out of business or lose market share or whatever to the startups. I've had a fair amount of exposure to digital native companies over the years, and the big difference between them and the legacy companies is that you don't have to persuade them that analytics are important. You don't have to tell them that they should do stuff with analytics. I mean, not everybody; I've had some students from Google and so on, and they were in parts of Google that were not terribly analytical, but in general the company is extremely analytical, same with Facebook and Airbnb and Uber and so on. You don't have to persuade them that they should do something; they're doing a lot. They don't even have a need for chief analytics officers or whatever; it's just kind of pervasive in those companies. So that is the big difference, and I would like these businesses that have been around longer than Silicon Valley to be able to evolve quickly into a form that will last over time and let them compete with those digital native companies, but not everyone seems up to the task.

Alp Uguray, Host & Creator: How do you see the culture of the companies impacting this? Because startups, being small, are able to move fast, and obviously Google and Facebook are massive right now, but some of the tasks are already embedded in people's roles, so they don't need someone else to drive a new initiative. Whereas an enterprise that has been operating for maybe 200 or 300 years, with systems from the 1950s, legacy applications, and whatnot, will need to really change the way people think and the way people approach problem solving to adopt these skills. So what are your thoughts on company culture, but also the talent pool within these two different kinds of companies? And what kind of digital skillset can they adopt?

Prof. Thomas (Tom) Davenport: Yeah, I mean, I think it is largely a matter of culture. I've done some surveys, usually with a guy named Randy Bean at NewVantage Partners; every year he's been doing a survey of chief data and analytics officer types. And every year we ask, do you have a data-driven culture? Are you a data-driven organization? And it doesn't get better. Typically maybe 25% say they do, and a lot of these are financial services companies with so much data that you'd think they would be highly analytical. So I think a lot about why that is, why the numbers have even gotten worse over the past decade in some cases. And I think companies don't have very many initiatives to change the culture in that direction. Obviously, they spend a lot of money on technology, and there's this kind of feeling that if you lead the horse to water, it will drink, but that often doesn't happen. So I think we need more explicit initiatives to address culture, but that probably won't happen unless the CEO is already a believer. If the CEO is a believer, then maybe they're moving in the right direction already culturally. But I think it comes down to culture, and what drives culture, of course, is the desires of the senior management team, particularly the CEO, but probably broader than that.

Alp Uguray, Host & Creator: And that would make sense, I think; similar to other initiatives, if the directive comes from the top down, it's faster, and people adopt it more easily rather than challenge the approach.

Prof. Thomas (Tom) Davenport: Yeah, and certainly there are some companies I've worked with where the CEO was a big believer, and for a while they did all sorts of fantastic things. I have a long-term friend, he was a neighbor at one point, Gary Loveman, who was a Harvard Business School professor when I first got to know him. Our kids played baseball together. Then he left Harvard and went out to Harrah's, which became Caesars. He became CEO of what was the world's largest gaming, or gambling, company, if you prefer that term, and they were hugely analytical, because he was an absolute believer that analytics was going to make them successful. He hired people and he fired people based on how analytical they were. But then he left, and it kind of devolved back into a typical gambling company in terms of its orientation to analytics. It didn't go away totally, but it's not nearly as strong as it was. So I think you've got to build it so that it's not just in the mind of one CEO or one senior executive, but something that really becomes well-established throughout the organization.

Alp Uguray, Host & Creator: On the topic of people adopting new skills and change, the book The Power of Habit highlights really well how people respond to the trigger, action, and reward mechanism. And there were some examples around Caesars in that book as well, like how they designed the gaming interactions to make them more addictive. Today's technologies and enterprises leverage similar habit-building concepts across their platforms, including Facebook, as everyone knows at this point. So it's even more widespread. Analytics and AI can drive these habit-building applications, which can be both detrimental and beneficial for the user. Based on your experience and exposure, what do you think?

Prof. Thomas (Tom) Davenport: Yeah, I think there is some potential there, and we've seen how well it works to inculcate bad behaviors in social media-oriented companies. In my latest book, we also looked at companies attempting to create more positive types of behavior change, again through the combination of analytics and behavioral economics-type nudges. Most of them were insurance companies. It took a while, but they finally sort of realized: why should we only pay people when they get sick? Maybe we could try to make them be well more of the time. So we talked about Anthem, Manulife in Canada, and a number of other companies trying to use data and analytics to nudge their customers into better health behaviors. And then I realized at one point, that's what Progressive has been doing with usage-based insurance: not only making you pay more, which is certainly a part of it if you're a bad driver, but also telling you that you're a bad driver and trying to get you not to be one. And in fact, Gary Loveman, the former CEO of Caesars, tried to do this at a big insurance company, I won't say which one, but it was just way too expensive; even adding your cell phone number and your email to their customer database was going to cost tens of millions, they said. So he said, okay, I've got to do this as a startup, and it's called Well, and it's using nudges and a lot of analytics and AI to create better behaviors. So I think there are some possibilities there. It's still early days, and we don't really know if it's going to work, but we might as well be trying for good behaviors rather than bad.

Alp Uguray, Host & Creator: Yeah, definitely. And as companies adopt this habit-building approach, there are also the regulations, and the ethics teams forming within enterprises to help govern some of those activities.

Prof. Thomas (Tom) Davenport: But then they fire all the ethics people, which seems to be happening this week at Microsoft, or at least all the centralized ethics people, and it happened at Google and so on. I don't know what's going on there.

Alp Uguray, Host & Creator: Yeah. Some part of it is for appearances: look, we're doing something, because we have a team. Some part of it is that governments imposed it, so we have to have this team, but we can reduce its headcount. And the third part, and this is the idealist in me, is that they're actually collaborating closely on building those habit-building apps.

Prof. Thomas (Tom) Davenport: Yeah, I've been kind of disappointed in these vendors, because I wrote something about Microsoft a few years ago, and I thought they were quite focused on ethics and so on. And now it's sort of, we don't need the centralized ethics team anymore. Well, I think they do. I think some of the stuff that OpenAI has done is quite impressive, but I don't think they should have just turned it loose on society without making it work a little better, just so they could be ahead of Google or whatever and try to bring some new market share to Bing. So I'm a little disappointed in the whole industry, frankly.

Alp Uguray, Host & Creator: I saw an interview Sam Altman gave; apparently OpenAI had an ethics team, or a team interacting early on with the way people work with ChatGPT and generative technologies, to make them friendlier and less toxic, which is interesting compared to how the others were approaching it.

Prof. Thomas (Tom) Davenport: Yeah, I mean, I think they've worked a lot on making it less toxic, and Microsoft should have learned a big lesson there with Tay a few years ago, which became toxic really quickly in the US version. The Chinese version did not become toxic, which probably tells you something about the different cultures of the two countries. And I do understand that having people try things out is really the only way you learn fully whether it's useful and helpful or not, but I think it came a little fast, and they changed their perspective. They did not release GPT-2 broadly, and they were quite conservative about releasing GPT-3 broadly, but with ChatGPT and GPT-4, which was announced yesterday, they're much less concerned, apparently.

Alp Uguray, Host & Creator: It was also interesting to see a chatbot succeed, because chatbots used to be an app that everyone hated. Everyone would say, connect me to a representative, immediately after starting to talk to the chatbot. Press zero as quickly as possible, or whatever.

Prof. Thomas (Tom) Davenport: Yeah, I think there are significant possibilities for specialized chatbots, not just the generic ones trained on the internet. A number of the companies that I work with are trying to fine-tune, to train the latest generation of generative tools on their own content. In fact, I'm supposed to have a call today with Morgan Stanley, who is doing this. A couple of insurance companies are thinking about insurance transaction-oriented chatbots that would help. But it's still early days. I don't think we know yet whether all those crazy things that make their way into ChatGPT conversations will end up happening in a chat about, you know, making a hotel reservation or whatever. I think there are ways to avoid that with fine-tuned training, but I'm not entirely sure yet. I'm trying to talk to companies about it.
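To make the specialized-chatbot idea concrete, here is a minimal sketch of the retrieval-grounded prompting pattern many companies start with before attempting full fine-tuning. The ChatCompletion call matches the pre-1.0 OpenAI Python SDK; search_policy_documents is a hypothetical stand-in for whatever internal document search a company already has.

```python
# A minimal sketch of grounding a chat model in a company's own documents,
# the pattern described above for specialized, transaction-oriented chatbots.
# Assumes the pre-1.0 `openai` Python SDK with an API key already configured.
import openai

def search_policy_documents(question: str) -> list[str]:
    # Hypothetical stand-in: a real system would query the company's own
    # document index here and return the most relevant passages.
    return ["(relevant company passages would be retrieved here)"]

def answer_from_company_content(question: str) -> str:
    context = "\n\n".join(search_policy_documents(question))
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            # The system prompt confines the bot to retrieved company content,
            # one way to keep generic-internet "crazy things" out of a
            # hotel-reservation or insurance-transaction chat.
            {"role": "system", "content": (
                "You are a customer-service assistant. Answer only from the "
                "provided company documents. If the answer is not in them, "
                "say you don't know and offer to connect a representative."
            )},
            {"role": "user", "content": f"Documents:\n{context}\n\nQuestion: {question}"},
        ],
        temperature=0,  # keep answers conservative for transactional use
    )
    return response["choices"][0]["message"]["content"]
```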

Alp Uguray, Host & Creator: What was the first thing that came to your mind when you saw it happening? Especially building on your recent book about leveraging AI, when you saw generative tech coming up, and excluding the way the public reacted, what was your personal impression of it?

Prof. Thomas (Tom) Davenport: In my latest book, we talked about it a moderate amount, mostly about code generation, because Deloitte was working with OpenAI on trying out how well it worked for code generation. There had been some other articles in the New York Times and other places about code generation, you know, GitHub Copilot and OpenAI's capabilities to do that; I forget what they called it. But there wasn't much in the world yet about the conversational stuff. The Facebook conversational tool was not very good. I had tried out GPT-3 for writing paragraphs and so on, and it was amazingly good, I thought. The only thing I was really surprised about was how quickly it was adapted to conversation. And then I was surprised at some of the crazy emotions it expressed: love, lust, hatred. It didn't take long until all of the emotions in humans made their way into ChatGPT.


Alp Uguray, Host & Creator: Yes, and I think it has like 60 billion data points and whatnot, so it probably has all the emotions at once. 


Prof. Thomas (Tom) Davenport: 175 billion parameters, and trained on over 500 billion pieces of text. So it's not surprising that some of them were toxic. And I don't know, frankly, how you find them, other than, I guess, feeding it a bunch of prompts and seeing what comes out. Nobody understands what's inside these models, which is the scary part, and maybe the magical part as well. But I don't know how you remove a toxic parameter, or two, or twenty, out of 175 billion. I think that would be quite challenging. I'm glad that's not my job, but I hope somebody is trying to figure that out.


Alp Uguray, Host & Creator: I think it's really tough. Hopefully that will lead to more moderation and an ability to produce more positive content. Right now, many companies are looking to leverage these models, build an application layer on top of them, and have users interact through that application interface. I've seen the Duolingo use case, and a few other use cases that I use myself.

Prof. Thomas (Tom) Davenport: Yeah, one of our friends in Boston, at the CRM company HubSpot, also has one like that. It seems quite effective.

Alp Uguray, Host & Creator: It was very interesting at the HBS tech conference two or three weeks ago. The CTO of HubSpot was there, and he talked briefly about it. He was saying that this is essentially the number one topic keeping him up at night, because if somebody goes ahead and builds a product that can compete with HubSpot, then, tying back to our earlier point, they could lose that competitive edge. And even though they like to call themselves a startup after ten-plus years, they're a big company now.

Prof. Thomas (Tom) Davenport: Yeah, and Salesforce just announced that they have an interface using GPT. I think maybe one of the more pervasive uses of this technology will be, as you suggest, that kind of front-end user interface for almost every piece of software.
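As a toy illustration of that front-end idea, the sketch below has a model translate a natural-language request into a structured command for a hypothetical CRM API. The JSON schema and action names are invented for illustration; only the OpenAI call itself reflects a real API (again, the pre-1.0 Python SDK).

```python
# A toy illustration of "GPT as the front end for every piece of software":
# the model turns a natural-language request into a structured command that
# the application layer then executes. The action names and JSON schema are
# hypothetical; nothing here is a real CRM API.
import json
import openai

SYSTEM = (
    "Translate the user's request into JSON with keys 'action' "
    "(one of: create_task, log_call, update_deal) and 'arguments'. "
    "Reply with JSON only."
)

def dispatch(request: str) -> dict:
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": request}],
        temperature=0,  # deterministic output suits command translation
    )
    # The application, not the model, validates and performs the action.
    return json.loads(reply["choices"][0]["message"]["content"])

# dispatch("Remind me to call the Acme account on Friday about renewal")
# -> {"action": "create_task", "arguments": {...model-chosen fields...}}
```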

Alp Uguray, Host & Creator: What were some of the use cases you saw as a professor that could be adopted, like grading homework, helping with writing papers, or whatnot?

Prof. Thomas (Tom) Davenport: Yeah, grading would be nice. I haven't seen anything like that. But I think many educators are sort of missing out on the potential of this technology. It's kind of like the reaction people had to calculators in math, and how they were prohibited. And now, of course, who really remembers how to do complicated long division? I think it's nice to have it in books in case you need it, but you don't really need to practice it very much. And I think we should be encouraging students to try it out and, of course, edit what comes out. This is going to be the way we work, and we might as well get students interested in that. So in my AI class, which I'm not teaching until the summer, I'm definitely going to have students write something using a generative tool, maybe I'll let them choose which one, then edit it, change their prompts, and go on, and I'll grade them on what comes out. Of course, you should tell your teacher or your boss that you're using this tool, but this will be a big competitive advantage for any knowledge worker who knows how to do this stuff.

Alp Uguray, Host & Creator: And on that, I believe Stanford developed some capabilities to detect which content was actually written by generative tech, to enable cross-verification. I'm not sure how well it works; I imagine it's around 80% accurate, and it rates those texts, but it's an interesting concept.

Prof. Thomas (Tom) Davenport: Yeah, and a Princeton guy had a tool to identify whether something was generated. It's going to be sort of a battle, I think, between the generative AI tools and the tools to detect generative AI. But in the long run, if we're not worried about how high-quality content gets produced, then I think we'll be better off. For the moment, at least, that means a lot of tinkering with different prompts and doing different edits on what comes out. And by the way, I think you used to work with Ian Barkin, right? Symphony and so on. He was on your podcast, wasn't he?
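Detectors like the ones mentioned here generally look for statistical fingerprints of machine-generated text. The sketch below is a toy version of one common heuristic, perplexity scoring; it is not the Stanford or Princeton tool. It uses the real Hugging Face transformers API, and the threshold is an arbitrary placeholder that a real detector would have to calibrate.

```python
# A toy illustration of the perplexity heuristic behind many AI-text
# detectors: model-generated text tends to be more statistically predictable
# than human writing. Not the Stanford or Princeton tool discussed above.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood
    return float(torch.exp(loss))

def looks_generated(text: str, threshold: float = 30.0) -> bool:
    # Lower perplexity = more predictable = more likely machine-written.
    # The threshold is an arbitrary placeholder, and the heuristic is easy
    # to evade, which is exactly the "battle" described above.
    return perplexity(text) < threshold
```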

Alp Uguray, Host & Creator: Yes, yes, he was on my podcast. He was one of the first ones. 


Prof. Thomas (Tom) Davenport: Okay, yeah, he's a great guy. We're doing some work together on citizen data science and automation and so on, and he did some research investigation using, I think it was ChatGPT, and it did a really good job, actually. It summarized many things that had been written about that topic, even some articles behind paywalls. I don't know how it does that, but it's great. We would never have found those, probably.

Alp Uguray, Host & Creator: Yeah, that's really cool, especially the access it has. You'd be able to say, here are some URLs, read what's in there, then summarize it for me.

Prof. Thomas (Tom) Davenport: Yeah, in some of them it works. I agree that's a great capability. I have access to GPT-4, but I haven't tried that yet; I guess the Bing version is based on GPT-4, so I don't know.

Alp Uguray, Host & Creator: And to your point, you're thinking about exploring the citizen data scientist, maybe the citizen developer, concept more. What are some of the areas you're looking to explore that can help the industry go forward?

Prof. Thomas (Tom) Davenport: Again, there's a big organizational change issue, and I think companies are often reluctant to embark upon a citizen journey for no great reason. Sometimes IT people object, sometimes for good reasons, sometimes for bad. Maybe they're afraid they'll lose their jobs, or maybe they just don't want to spend all their time cleaning up bad project work that citizens have done. But in the companies that do this well, there are a few key attributes. Ian has persuaded me that you have to do recruitment well; I have found in the citizen data science space that some companies have really struggled to find people who want to be citizen data scientists, even though they're willing to train them and so on. Then you have to think about tool-related standards: do you have a special end-user version of a tool, like StudioX, the UiPath end-user version, or can you just turn them loose on the regular offering? Even the regular offerings are getting pretty easy to use these days, particularly in the automation space. Then you need some training, and probably certification after that. There's a community-building component, where you have people meet occasionally and share their ideas. And then there's, I think, an infrastructure component, where you say, in the data science space, here are a bunch of pre-engineered features, and we'll put them in a feature store; in automation, you can have a little hub of the automations (I never really liked that as a plural noun) that have already been developed, so people can access them and save time. So there is a fair amount of work an organization can do to make citizens much more likely to be successful, but the biggest issue is that many of them just don't try it at all.
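To picture that infrastructure component, here is a minimal sketch of a shared hub of pre-built automations that citizen developers could search and reuse. Every name in it is invented for illustration; real platforms ship far richer versions of this.

```python
# A minimal sketch of the "hub of already-developed automations" idea: a
# shared registry citizen developers can search and reuse instead of
# rebuilding from scratch. All names here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Automation:
    name: str
    owner: str
    description: str
    tags: list[str] = field(default_factory=list)

class AutomationHub:
    def __init__(self):
        self._registry: dict[str, Automation] = {}

    def publish(self, automation: Automation) -> None:
        # Certification or review could gate this step in a real deployment.
        self._registry[automation.name] = automation

    def search(self, keyword: str) -> list[Automation]:
        kw = keyword.lower()
        return [a for a in self._registry.values()
                if kw in a.description.lower()
                or kw in (t.lower() for t in a.tags)]

hub = AutomationHub()
hub.publish(Automation("invoice-intake", "alp",
                       "Extract invoice fields into the ERP", ["finance"]))
print([a.name for a in hub.search("invoice")])  # ['invoice-intake']
```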

Alp Uguray, Host & Creator: Yeah, and then they won't build the guardrails either. I see community as being very impactful for knowledge exchange. This happens a lot even with advanced developers: if somebody has a question, they typically go to Stack Overflow and then copy and paste into their canvas. For citizen developers, it's more on the low-code side, and they will have similar questions about how to tackle things, but if there's a community where network effects are enabled, that can really help people move forward.

Prof. Thomas (Tom) Davenport: Yeah, and I think generative stuff will enter all these spaces. God knows it's going to generate a lot of program code, it's going to be the interface for a lot of these tools, and it's going to make it much easier for citizens to do this work. Ian found some interesting things on Reddit and Discord about people holding down multiple jobs because they've automated the tasks in one job, and then they have time to do the things they need to do as a human in the other jobs. Maybe they've automated two jobs and they hold a third job, or whatever. It's called being overemployed.

Alp Uguray, Host & Creator: It's enabling those automations to do the work.

Prof. Thomas (Tom) Davenport: Exactly. And that is a good way, you know, if you want to make some money from automation: you can start piling up those jobs and bring in a fair amount of cash.

Alp Uguray, Host & Creator: Yeah, that's really maximizing productivity at its best.

Prof. Thomas (Tom) Davenport: Yeah, individual-level productivity, if not organizational-level productivity.

Alp Uguray, Host & Creator: Yeah. And building up from this, what do you think is waiting for the world within the next four to five years, based on what you see? Maybe you can even allow some utopian thinking as well.

Prof. Thomas (Tom) Davenport: Yeah, I have in general been quite positive about AI, and I wasn't terribly worried about it becoming our robot overlord and killing us all. Now I'm a little more concerned, seeing what the generative tools can do, and I think we probably need to get a bit more serious about regulation, certainly in the US; the EU is well ahead of us in that regard. Somebody said on a podcast I was listening to the other day that it could be a thermonuclear bomb of disinformation, and that could have some really serious consequences. Right now the deepfakes are not very good, but I think before long they're going to be really quite good, and there are so many things that could go wrong there. So I think we need to apply the brakes a little bit from a regulatory standpoint, which I didn't think until recently. And I think for me the biggest advance from some of these tools will be going back to what I did 20 or 25 years ago in knowledge management: really managing the knowledge of an organization so that people can access it easily, without necessarily having to put it into Lotus Notes or SharePoint or whatever. It's not going to be easy to do all this fine-tune training, and I think there will be different levels. I was talking to a law firm the other day about this, and they said, well, there's the basic large language model, and there are already a couple of versions of law-oriented generative tools below that. But then you have to say, okay, UK law (this was a UK lawyer I was talking to) is different from US law, so we have to have different versions for that. And real estate law is different from, I don't know, securities law or family law or whatever, so we have to have different versions there. And then you maybe have versions for each specific law firm. So we're going to be proliferating a lot of generative tools, which may get a little confusing at times, but I think it does mean we'll have access to a lot. The Morgan Stanley people were saying they trained GPT-4 on a hundred thousand of their documents, and it makes it possible for everybody to have the intellectual resources of the entire firm at their fingertips, with a prompt. So that's very exciting, I think, and who knows what it'll do. I'm trying to reach a woman at CNET: they fired a number of the journalists there, and she's no longer the editor, she's in charge of AI now. When I talked to her about this, she said, yes, but let me be in my job for a month or so; I'm trying to figure out what I'm supposed to do. So I think it's going to need a lot of changes in organizations, and more and more I'm persuaded that the people who have something to fear from all of this are the people who refuse to work with AI, to experiment a little bit with the generative tools, to edit AI systems' outputs, to tinker with prompts, and so on. I just think everybody's going to be working with AI before long if you're a knowledge worker. Maybe that'll even be true of robot ditch diggers and so on as well, but it hasn't been that way thus far.
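The layering of generative tools described here (a base large language model, law-tuned versions below it, jurisdiction- and practice-specific versions below those, and firm-specific models at the bottom) can be pictured as a simple tree. The sketch below is purely illustrative: every model name in it is invented, and a real deployment would map each node to an actual fine-tuned checkpoint or retrieval index.

```python
# A purely illustrative sketch of the "levels of generative tools" idea for
# law: base model -> law-tuned -> jurisdiction -> practice area -> firm.
# Every model name here is invented.
MODEL_TREE = {
    "checkpoint": "base-llm",
    "law": {
        "checkpoint": "law-llm",
        "uk": {
            "checkpoint": "uk-law-llm",
            "real-estate": {"checkpoint": "firm-x-uk-real-estate-llm"},
        },
        "us": {
            "checkpoint": "us-law-llm",
            "securities": {"checkpoint": "us-securities-llm"},
        },
    },
}

def most_specific(path: list[str]) -> str:
    """Walk from the general model toward the most specific one available."""
    node, chosen = MODEL_TREE, MODEL_TREE["checkpoint"]
    for key in path:
        if key not in node:
            break  # fall back to the most specific ancestor that exists
        node = node[key]
        chosen = node["checkpoint"]
    return chosen

print(most_specific(["law", "uk", "real-estate"]))  # firm-x-uk-real-estate-llm
print(most_specific(["law", "us", "family"]))       # us-law-llm (fallback)
```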
Alp Uguray, Host & Creator: Yeah, I think the human in the loop will become the norm for many enterprises as they work with the software. And there will also be differences in productivity between somebody who is able to leverage the generative tech and automation at large versus somebody who's not. You touched on that really well: somebody who's already doing four jobs in a day versus somebody who's barely doing one job in a day, maybe with identical tasks, will definitely be a game changer going forward.

Prof. Thomas (Tom) Davenport: Yeah, and you know, I still sort of feel that if you don't need a human in the loop, it's probably not an interesting enough job for a human to be performing anyway. Although there are some people who don't like to be challenged much at work and who are happy doing the same thing over and over again; I think for them it's going to be tough. But if you are open to new ways of working and dealing with technology and so on, I think you'll be fine, until the singularity happens, and then all bets are off.

Alp Uguray, Host & Creator: Yeah, that is very true. And I've seen that Sam Altman's new company is trying to tie identity to content; I think they do an iris check for human verification, so maybe at some point in the future it's no longer going to be choosing pictures and finding traffic lights.

Prof. Thomas (Tom) Davenport: Yeah, well, I hope so, and I think we'll need some kind of watermark approaches. I use DALL-E a fair amount for image generation; I've replaced my possibly illegal Google Images in my PowerPoints, mostly with DALL-E 2 now, and I love the fact that there's that little multicolor watermark in the corner. I think we need to find ways to do that for text as well, and then maybe people won't be as worried about it; it'll say, some of this text was generated by such-and-such a program, but it was edited by this human, or whatever. It also means, I think, that editing skills become more important than first-draft capabilities, and I think most of us are good at one or the other. I hate editing; I wish there were an editing system that did a good job, rather than a first-draft system, since first drafts are the thing I do well. Maybe there will be both at some point, I don't know, but right now you really need to be good at editing.

Alp Uguray, Host & Creator: This is more theoretical thinking, but on the one hand there's AI generating content with humans, building paragraphs, code, and images, and on the other hand there was the promise of blockchain, at some point, to track things down and make everything recognizable. Do you see any intersection point where the two technologies can complement each other? Obviously there's the Bitcoin market and exchanges, and there are NFTs, but going beyond all of that, just the technology itself and its promise that things are transparent and trackable.

Prof. Thomas (Tom) Davenport: You know, I never really focused all that much on blockchain. I've always been more of a what-you-do-with-information kind of guy than a transaction-oriented person, and I always thought blockchain was a transactional technology. And I must say, I was puzzled a long time ago: if this is such a great way to protect individual ownership and so on, why do we have so many frauds and breaches and hacks in this space? I have yet to get a good explanation of that. And unfortunately, I think a lot of the exploration companies were doing with blockchain has stalled; for example, Maersk, the big shipping line, had a thing for keeping track of all their containers, working with IBM, and they dropped that project. So I don't have really high hopes at the moment for rekindling it quickly. I think something needs to change in the technology to make it safer than it's turned out to be, and to bring about that clarity of creation and ownership that we were promised in that area.

Alp Uguray, Host & Creator: I think the model layer really needs to keep up in quality to allow more use cases, more tangible use cases, that businesses can adopt. We are now out of time. Thank you very much for joining the discussion. I really enjoyed what we talked about. I think we covered a lot of topics that are going to be very impactful on people's lives in the future, especially the adoption of automation and analytics, the adoption of generative tech, the cultural aspects of enterprises adopting change, and citizen development and citizen data science, with ties to low code. This was a very insightful discussion, so I really appreciate you taking the time to chat with me.

Prof. Thomas (Tom) Davenport: Thanks for not using the letters RPA. Is that allowed on this podcast?

Alp Uguray, Host & Creator: Yeah, that is true! Nice talking to you.