Kelly Fitzsimmons, Sarah Di Troia and Sam Caplan
Project Evident’s founder and CEO Kelly Fitzsimmons and CIO Sarah Di Troia lay out a vision for how AI can bring the social sector together to create more effective programs.
This episode of the Impact Audio podcast features Project Evident’s founder and CEO Kelly Fitzsimmons and CIO Sarah Di Troia. They dig into how AI is empowering the social impact sector and what practitioners need to do their best work.
They break down:
Why “do no harm” is not a solid AI strategy
How evidence powers innovation
Why AI requires a new level of collaboration for the social sector
Kelly is a committed social innovator. Previously, she served as Vice President/Chief Program and Strategy Officer at the Edna McConnell Clark Foundation (EMCF), where she led policy innovation, evaluation, grantmaking, and the early capital aggregation pilot. Prior to EMCF, she co-founded Leadwell Partners and New Profit Inc., and held senior leadership positions in nonprofit organizations and served on several foundation and social sector boards and advisory committees. Kelly currently serves as a Leap Community Ambassador and is a member of Results for America’s Invest in What Works Federal Standard of Excellence Advisory Committee and EDSAFE AI Alliance’s Steering Committee. A graduate of McGill University in Montreal, Fitzsimmons holds an MBA from Boston University.
Sarah Di Troia has dedicated her career to analyzing the mechanics of growth and supporting nonprofit practitioners to scale their impact to individuals and communities. Her experience as an investor, advisor, and leader fuels an approach that integrates with the internal change management necessary to realize new opportunities. Sarah has spent the last five years researching AI and partnering with early adopters, including funders and practitioners, to help realize the potential of this new tool in enhancing productivity and outcomes. Before joining Project Evident as Chief Innovation Officer, Sarah served as Chief Operating Officer at Health Leads, Managing Partner at New Profit, Inc., and an Associate Director at the Center for Effective Philanthropy. She earned her MBA from Harvard Business School.
Sam Caplan is the Vice President of Social Impact at Submittable, a platform that foundations, governments, nonprofits, and other changemakers use to launch, manage, and measure impactful granting and CSR programs. Inspired by the amazing work performed by practitioners of all stripes, Sam strives to help them achieve their missions through better, more effective software.
Sam formerly served as founder of New Spark Strategy, Chief Information Officer at the Walton Family Foundation, and head of technology at the Walmart Foundation. He consults, advises, and writes on social impact technology, strategy, and innovation.
Connect with or follow Sam on LinkedIn, listen to his podcast Impact Audio, and subscribe to his bi-weekly newsletter The Review.
Episode Notes:
Follow Kelly on LinkedIn
Follow Sarah on LinkedIn
Learn more about Project Evident
Check out the “Responsible AI Adoption in Philanthropy” framework
Read the Funding the Future report
Learn about the EDSAFE AI Alliance
Read Opportunity at Scale, the white paper about the necessary role of public infrastructure for R&D in education
Get the AI Grantmaking Rubric
Learn more about the Stanford HAI + PE Social Sector AI Opportunity Gap
Transcript:
This transcript was automatically generated.
Evaluation versus innovation. In social impact, we often put these two concepts in opposition to each other. You're either committed to the slow, overwrought process of evaluation, or you're living on the cutting edge, taking big risks. But the truth is a lot more nuanced. Recently, I picked up the book Next Generation Evidence, which is a collection of essays and case studies on what it would take to create a more equitable data and evidence ecosystem.
And within the first few pages, the authors broke down the false binary of evaluation versus innovation.
They explore how evaluation and innovation are deeply intertwined. You can't have one without the other.
The book is an expansive look at how we understand data in the social sector. And it's also a reminder of how easily false binaries creep into our work and make a home where they don't belong.
Welcome to Impact Audio. I'm Sam Caplan, vice president of social impact at Submittable. Today, I sit down with Kelly Fitzsimmons and Sarah Di Troia from Project Evident, the group behind the Next Generation Evidence book. Kelly is the founder and CEO of Project Evident, and Sarah is the chief innovation officer and managing director of their outcomes AI work. Together, they've been helping organizations of all shapes and sizes, from small nonprofits to school districts, build and use data and evidence for greater impact.
They have great insights about what the social sector needs right now and how AI fits into the work.
Kelly, let me start with you. How about just a brief introduction?
Tell me about you, kind of what brings you to this work, and I'd love to hear a little bit about the organization that you have founded, Project Evident.
Sure, Sam. Happy to dive in, and happy to be included in this conversation.
I have been a traveler in, I'd say, the social sector, the social innovation space for three plus decades at this point, engaging from different perches, whether that's in direct service, whether it's in philanthropy, whether it's in consulting and technical assistance.
And every single time, the motivation for me is ultimately about how do we make things better for practitioners? How do we make things better for those who are most committed to the work of social change, who are most committed to ensuring that what they're doing is resulting in outcomes as intended for students, for program participants. And that's what gets me up in the morning.
And what led us to start Project Evident was a period of time when I was at a large national foundation that was making a significant strategic change, on the heels of the end of the Obama administration. We'd been actively involved in evidence based policy and social innovation, and I was fundamentally starting to see things I couldn't unsee.
And that was really about the work of outcomes, the work of impact that we care deeply about, yet the experience of that work was less than. It was a more negative than positive experience. Our peers and colleagues who were working on innovation or product development in other sectors were, you know, naturally supported to engage in r and d, to engage in continuous learning, to engage in a more optimistic and positive way, rather than being in a thumbs up, thumbs down construct of did this thing work or not. And at the same time, we were really seeing a loss of connectedness between data, technology, and evidence.
So when I was seeing these things and recognizing in my own work in philanthropy that I may not have been optimizing the role of philanthropy in making these investments in supporting practitioners, I wanted to explore more of what I was seeing, and it was a no brainer for me to reach out to Sarah Di Troia, whom I had long known for a good part of those many decades I've been in this work.
I wanna hear from Sarah as well, but can you just go a little bit deeper, Kelly? When you say, like, things that you can't unsee, that frightens me. So tell me a little more. Like, you know, give me an example or two of what you're referring to.
Absolutely.
So I think that there were unintended consequences of a lot of the infrastructure, the structure, the norms that have been stood up in evidence based practice: how it is we discern what counts as evidence, how we think about tiered evidence frameworks, who merits what, or what are really the ways of knowing. And so the things that I couldn't unsee were that we had skewed overly toward randomized control trials versus other methods and ways of knowing. For example, funding things that we should not yet have funded; there was more groundwork to do. There were alternative methodologies.
There were divides between evaluative practice and what was happening in data science. And the data science people weren't talking to the evaluation people, and nobody was talking to the technology people. There were presumptions that we could just access the data without having to invest in data infrastructure, like the plumbing, and these assumptions that we could just add water in some cases and then know things.
And we were spending a lot of money on these studies and a lot of methods that were not actually delivering real information back to practitioners to support decision making, to support their own learning.
And so the things that you couldn't unsee were this mismatch of methods, these disconnects between data, evidence, and technology, and fundamentally this decentering of practitioners. Our shorthand, you know, with Sarah became: practitioners had really been made the caboose of the train. They should be in the engine. They should be driving this. We have this the other way around. This is not how we support entrepreneurs and leaders in other sectors. So can we flip the script, and can we find alternatives to either close these disconnects or find new ways of doing the work that can get us better results at the end of the day?
Yeah. That's such a brilliant insight, Kelly. And I'm super excited that that was the vision around Project Evident that you and Sarah and others have coalesced around. Like, I agree, there is so much investment that doesn't necessarily lead to the outcomes that we're expecting.
And I think oftentimes, we don't really know why it doesn't lead to that. And so to know that your organization is doing this hard work of really understanding the investment from a data perspective, and really looking for, like, authentic evidence of real outcomes, is a very heartwarming story to be able to tell and one that I'm excited to hear. So, Sarah, let's hear a little bit about you. You've been part of this organization for a little while.
As we said at the beginning, you and I bumped into each other at lots of philanthropy events out there. So tell us a little about your background and what you do with Project Evident.
Sure. So first of all, a little story that Kelly did not share is how we got to know each other. So Kelly and I got to know each other on a project, and, Kelly, this has to be thirty years ago at this point, which was about bringing the Internet to nonprofits. It was a whole RFP process to actually create dot orgs, to take offline nonprofits, and to think about how they could be brought online. So I'd say our interest in how technology could actually scale what works goes fairly far back.
Don't count all the years, but it goes fairly far back.
My, my career, I've done a number of things. I've been on the for profit investing side. I've been also a grant maker.
I've been on the for profit consulting side. I've been on the nonprofit consulting side. I've worked in management inside of a nonprofit.
In all of those different seats at the table, I've been focused on how to grow things. Like, that's been the fundamental thing that I'm really excited and interested about: how do you grow things. A big part of my career, the early part of my career, was focused on geographic replication.
I mean, that was sort of this idea of venture capital money, which was, you know, new in the headlines, being used to fund Internet based businesses. And then I had a big shift about the middle of my career, which went to transformative impact, I think as we all butted up against the ceiling of available capital. Right? When you think about franchise models and how for profit entities scale what works for them, or make more money from what works for them, you know, you're thinking about debt and you're thinking about equity and you're thinking about private investment or shared ownership. I think about it like you open up the cutlery drawer, and you have all these different pieces of silverware you can use to try and help that organization grow.
And on the nonprofit side, we have, like, a rusty butter knife. It's like, we've got a grant. That's what we've got for you. So I think in terms of geographic replication, we all kind of butted up against the fact that the capital market in the social sector just did not function the same way the capital markets did in the for profit sector.
And you had to get craftier about how we were gonna think about scaling things, and getting crafty was happening outside the US. And so there were a lot of really transformative, thoughtful ways to try and scale programs that worked in much lower resource settings, and more challenging ones in terms of language difference or geographic difference. And so I got really excited about what transformative impact looks like, how you think about, you know, building just for policy shift. And for me, because a lot of my career had been focused on how to get big, it was pretty radical to change the way I was thinking and say, well, how small could this organization be?
How small could it be and still effect significant change? The work that I got to do with Project Evident, you know, one of the things that so spoke to me about how Kelly felt about the sector and how I felt about the sector is that I think we both just genuinely believe that if you're gonna spend eight hours of your day doing something, you'd like to do it better.
Just as a human being, you would like to do it better. You would like to improve. We kind of believe that human beings are wired to want to improve and do things better. And there was this just very weird narrative, particularly prevalent in the funder space, that nonprofits didn't want data.
And to both of us, it just felt like, why would they not wanna do their jobs better? Why would they be different than me or you? So the idea of how do we bring data to bear to help better decision making, and de-complexify decision making, was pretty exciting. And then that coincided for us with AI, which for me in my career is, like, the third turn on how do you scale what works. And I feel really grateful that my career is now spanning, like, the third turn, and to think about, okay, this is an entirely different way to think about scaling what works, how do you scale impact. And it's married with this belief that everybody wants to do their job better, and having information about how you're doing is fundamental to how we're wired as human beings about wanting to do better.
Bringing those two things together is super powerful and, makes the work really joyful.
Yeah. I love that you sort of touch on this axiom that everybody wants to do better. I wholeheartedly agree with that. I'm curious, from both of you: what do you think is the bigger roadblock to doing better? Is it investment or is it innovation?
We're genetically predisposed to answer with hybrids.
So it's a combination platter, because you also can't innovate without investment often, and you can't make good investments without innovation. And, actually, Sam, at some point, this is at the heart of taking an r and d mindset.
This is part of breaking out of the thumbs up, thumbs down. So if we're serious about trying to solve big problems or small problems, there are still routes and routines and pathways that we need to be able to take, to test, to learn, to make those improvements.
And the ability to make those investments of money or time.
You can do a lot on the cheap with technology. We don't have to think about innovation as a really high price tag opportunity, but you do have to have the time, the energy, the mental energy, the free cognitive load to be able to engage in that work, and very often the courage. Because as Sarah's pointing out, the myths that often abound make it really hard sometimes to be in very honest, open, learning based dialogue with key stakeholders.
So being able to lead in that context, with that innovation and investment dyad together, I think the courage to lead with learning and the courage to lead with impact is really the mark of leadership in our sector right now. To be strong enough to say, this isn't working the way that I would like it to work, but have confidence in me. I'm gonna work on getting it better. Will you help me? Will you invest in that? And here's actually how I'm going to do that, opening up the black box to show that path or that road map, as opposed to hiding everything under the bed or under the rug because the funder's coming, or I can't say this to my board because of x, y, and z, or I can't be honest with the parents in my school district because of x, y, and z. That holds us back. I'm sure Sarah has a very different take.
I wouldn't say it's different, but I wanna use the word courage as a jumping off point, because I think our field has a very difficult relationship with risk.
So on one hand, for practitioners, you know, they are so concerned with not in any way harming, as they should be, the individuals and the communities they serve. Right? They have a low risk tolerance, which can dampen your desire to innovate. Right? Even if you see that change needs to happen. Like, I have the privilege to be with people when they're at their most vulnerable.
I have to be so careful in terms of my own risk profile that I don't do anything that could create any sort of harm. On the funder side, we also have a very low risk tolerance, which is something I still haven't quite figured out why that's the case. Because in some ways, the capital that's available through philanthropy, which we often talk about as risk capital, really functions as, you know, almost more averse to risk than government investment. Right?
That's right.
It's a little bit of a myth, isn't it?
Yeah. It's a total myth. And so we have these two, you know, halves of the coin. Right?
We have the practitioners and we have the funders. Both of them are desperate to see extraordinary change happen on behalf of the individuals and communities they serve, you know, within their missions. And both, I think, are really concerned about doing any harm. But the challenge is that the change that they seek is change that no one has been able to deliver for a hundred years.
I think we're all productively unsatisfied with the change that has been created on a multitude of issues that matter for vulnerable communities.
And so changing our relationship to risk is required to be able to embrace innovation. And without embracing innovation, we're just not gonna get there. I love the fact that innovation was never something that was expensive. I mean, you could use a pen and paper. It was not required to do expensive things to innovate.
But I do think maybe using technology and things like A/B testing and synthetic data is a little more expensive way to innovate, but maybe one that speaks more to the risk profile of both the practitioners and the funders, and that could get us to a place where we're willing to embrace more innovation in pursuit of shared mission.
Yeah. And now we're at a really interesting inflection point, I guess, in the nonprofit sector. Because before this emergence of generative AI, I think that many, many nonprofit organizations would gladly have said, I can fully take advantage of the tech that exists out there today to do our work better and to generate more authentic outcomes.
But I don't have the funding to do that, because the funders are, you know, excluding technology investment from many of the grants that were being made at the time.
Now, over the course of the last couple of years with this emergence of generative AI, I'm just not so sure anymore. It's like they still struggle a little bit with the investment side of things, and we have this emerging technology. And, Kelly, you'd mentioned that, you know, you and Sarah were around at the beginning of, sort of, the dot com boom, as the Internet was proliferating and we were all adapting and changing our work processes to be able to take advantage of the Internet. And here we are again, some twenty five years later, and now we're doing this with artificial intelligence.
And I think it just makes for a really interesting time in the sector, where now nonprofit organizations are really looking for the capacity, the innovation, and the investment from funders to be able to make all of this a reality. And I think that's sort of what you're getting to in Funding the Future, the document on grantmaker strategies and AI investment that you guys created, which I went through again today, and I just love it. I think it's a fantastic summary of sort of where we are right now, and great strategies that funders can be thinking about and implementing to make sure that their nonprofit partners aren't missing the boat in terms of being able to leverage artificial intelligence to really sort of reinvent the way that they're envisioning these interventions and creating these outcomes. Could one of you tell me a little bit about how this report came to be and, really, what your goal was in developing it?
Sure. Well, we became really excited about the possibility of machine learning. So we'll step back from generative AI to machine learning, to drive outcomes.
And we were fortunate enough, as part of the Next Generation Evidence book, which talks about r and d as a real principle of next generation evidence, to do work with Pete York from BCT Partners, who wrote a chapter in that book. It was focused on two organizations, one called First Place for Youth and the other called Gemma Services, both of whom were using predictive analytics.
And the exciting thing about the use of predictive analytics is that it could really move those organizations from a one size fits all program model into really a customized program model, which, you know, had the opportunity to really blow the doors off of impact for individuals.
So we got really excited about this, and we started talking about it. And we heard a variety of messages from the funding community, none of which matched our excitement.
A lot of it was concerns about risk. And so the first concern was, well, if I don't know how to use this, I can't possibly have my grantees use it. So we were fortunate enough to partner with the Technology Association of Grantmakers and work with several hundred of their members to create a technology adoption framework.
So funders could just begin thinking about how they were gonna adopt AI internally themselves.
And so we said, great. We've answered the question. And we went back to funders and said, now you know how to adopt. Let's really talk about AI and practitioners and data.
And they said, wait. Wait. Wait. Wait. We have a lot of time on this. AI is not coming quickly to this corner of the world.
You know, practitioners are not using AI at all. And we had the good fortune of partnering with the Stanford Institute for Human Centered AI. We did one of the first AI interest in use surveys for funders and practitioners basically to show that, in fact, practitioners were using AI. They were interested in using more AI.
And in fact, their interest and current use outpaced funders' interest and current use. So then we really felt like, okay. Now we have it. We can show that they need to get moving.
We showed them, you know, a framework to help them think about how to get moving. What's next? You know, now let's really talk about practitioners. And what we heard was, well, we don't know how to make these grants yet.
So that's really where Funding the Future came from: increasing support for funders around their own thinking about who's doing this well. You know, there are a handful of grantmakers out there that have positive outlier stories around the way that they are considering and due diligencing, the way they're setting up their own internal capacity for making grants to organizations that want to use AI themselves. So that's really where Funding the Future came from.
If I circle back to what you were saying, so many grant making organizations are, in reality, Kelly, very risk averse. Like, they are very reluctant to fund areas that they don't have a great understanding of themselves. So I think that's understandable.
AI has been this emerging sort of technology, at least generative AI, over the last couple of years. And I think, admittedly, the data in the report shows that, you know, at least fifty percent of funders feel like they don't have a good grasp of artificial intelligence. Sarah, what you're describing is that these program officers need to be playing a greater role, but they also have a lack of understanding that makes it difficult for them to know how to assess a proposal that incorporates artificial intelligence. So, like, what is sort of the best way to progress all of this? Like, how do we help funders better understand artificial intelligence and get them more comfortable so that, one, they can make better decisions about the funding that they might provide? And, two, how do we compel them to actually begin including funding for artificial intelligence based projects in the grants that they're making?
Sam, I just wanna add a couple of things. I think it's really important to put AI in context of the things we care about in the sector. So Sarah's point about thinking of this in the context of mission attainment is really critically important.
And it's not like a demon technology sitting out there waiting to come for us. Like, put the evolution of these innovative approaches, tools, and practices in context of what we're trying to achieve and how it is we want to achieve it: ethically, safely, mitigating harm.
And that's part of a learning journey, but I think it's also a necessary journey for philanthropy, which did for too long, you know, sit on its hands or take a back seat or not understand the speed and rapidity with which this transformation is taking place.
Interestingly enough, we see a lot of resonance or drive to learn from more social justice oriented funders because they know the consequences of policies of things that are not well understood that start to scale and become part of our everyday practice.
And so in some ways, harnessing, you know, that larger context, we have the benefit of being able to put AI adoption in context of, hey, look. We've already created a data divide. Before AI was here, like, we'd created a number of divides because of the way we allocate resources, because of who gets money and who doesn't, who gets what technical assistance and who doesn't, how we think about data as an asset and who has rights. We already had these issues.
If you care to close these gaps, you must learn about AI and what, you know, responsible adoption looks like, so we don't create an even worse divide than the one we're already attempting to close. And that resonates with, I think, a lot of funders.
And, you know, we've sort of had this saying now for a few years that in some ways, generative AI in particular has become a Trojan horse for good when we go back to talking about infrastructure. Because it is like, oh, now you wanna talk about foundational data investments or data capacity. Oh, now you wanna talk about technology investments. So it's actually opening up space that has been pretty closed, philanthropically speaking, and not space where philanthropy has historically invested. So there's renewed interest for those funders that are coming in saying, I think I need to learn about this so I can better support my grantees and so I can promote safer, more equitable use and not replicate these gaps that we've created in the past.
So there's a warming context, I would say, for them very tactically, practically, and strategically being able to engage in the work. And I would say there's a set of myths getting unseated about who can actually use this technology and what it's for.
And, you know, Sarah, I would just lateral over to you the work that you're leading, you know, with the team, and what you're seeing with community foundations and other funders that are diving in. They're not, you know, holding back.
Some of the most exciting work we've been doing right now is working with community foundations and with smaller community based organizations. So, you know, sort of on average, maybe fifteen staff members.
I think the historical story of how technology enters the social sector is that it hits the elites first. And I define the elites as either practitioners or funders that have high amounts of assets or can provide a great brand halo to technology companies. That's typically how technology enters the sector.
And it takes a long time to filter down to, like, where the bulk of our community lives, which is small community based organizations.
The exciting work that we've been able to do with Google.org is that they have funded us to work with community foundations across five cities and with a subset of their portfolios, to really be on the ground with small community based organizations and to see what a half day of training can do, specifically around fundraising. What can a half day of training do around, you know, workflows for custom GPTs? So the reality is that these organizations are excited to use the technology.
They get incredible benefit immediately from using the technology. I mean, we have sat in multiple sessions where somebody will raise their hand and say, well, I've just submitted a grant. In the half an hour that we were just doing a hands on keyboard exercise, I've now submitted a grant.
So I think the cool thing about general purpose technology is that, being general purpose, it doesn't have to only be adopted by those who are most sophisticated or have the highest asset bases in terms of their ability to access the technology.
But you had asked a question about what's the way forward coming out of the Funding the Future report, which shows, yes, fifty percent of program officers feel confident in being able to do ethical impact assessment, but fifty percent don't. You know? And thirty six percent feel confident in their ability to do technical impact assessment, but the majority don't. What does that mean for how the funding community moves forward? And I think the answer is, like, the same way we answer most challenging questions in our sector, which is it's better to move forward together.
And I know it's hard to get eagles to flock. I appreciate that there's a reason why funders kinda go it alone, that they don't necessarily have to work in partnership.
I would just say this is one of those once in a generation moments where we should tackle this together. Even the infrastructure part, we should tackle together. We don't have extra time or extra resources for everybody to invent their own training class.
Please fund universal training. We don't have time for everybody to invent their own ethical framework analysis. Please, let's agree on an ethical framework so we have a standard that is practical and tactical that we're working towards as a field. I just don't think we have the time or the resources, in a moment when there are so many demands on our time and resources, for everyone to go it alone and invent it alone.
I think we're better together.
Yeah. That's a very juicy idea, Sarah. I completely agree with you and have been a huge advocate over the years that funders should do a much, much better job of coming together to co-fund solutions that benefit large swaths of the nonprofit community.
And in my experience, it rarely happens, especially among, like, the large legacy grantmaking institutions. And it's not their fault. It's that they have been making grants for forty, fifty plus years, sometimes longer. Every year, the grantmaking processes become more complex, more bespoke, and it just becomes so difficult for them to change their work processes or change their ideology or their investment philosophy to align with many of their peers and work together towards some of these common impacts. And I'm super curious, like, how do either of you, you or Kelly, envision these large grantmaking organizations coming together to build this infrastructure for the whole nonprofit community?
I think that is the multibillion-dollar question.
And at some level, I think a reality may be that we have to start on a regional or a domain basis, because at some level, that is how philanthropy and funding flows are already naturally organized, how data is already naturally organized.
And it might also be easier to move and get traction if we can think about our sector in terms of the industries, the challenges, the outcomes, the impacts that we're in pursuit of. So if we're picking on workforce, or we're picking on K-12 education, or college persistence, whatever the domain might be, there are players across those activities with a common set of outcomes, where I think we will be better able to see more aligned investment to enable the use of the infrastructure that we already have, build out some of the gaps, elevate leadership, provide that capacity building, and quite frankly begin to show what's possible. I think it's really hard to digest ideas of a national infrastructure that's going to serve all purposes. Like, if that's the goal, where do we begin?
And I, you know, I think that type of infrastructure-building approach is perhaps more likely to facilitate scale, even if it takes longer. But we haven't really gotten it right in the social sector yet in any one particular domain, and I think it's really important that we try to get it right as soon as we possibly can, which will also mean needing to make advancements in contexts that are more shovel-ready than not. We have a lot of existing infrastructure already through integrated data systems, through existing data sharing agreements, through organizations that already have networks of relationships where there are, you know, shared workflows, shared referrals, where there are funders that are already working more closely with their grantees in pursuit of outcomes.
But I just don't think we're seeing strategy occurring in those spaces, in those industries, or those sub industries to show what's possible when that shared infrastructure is made available.
Yeah. And to me, I think that really summarizes what's needed most. There are so many pockets of independent development taking place, and that's all really interesting, but there is no universal voice of strategy or strategic AI adoption.
No one has taken that mantle yet. So, Sarah, if MacKenzie Scott and Melinda French Gates and John Palfrey over at the MacArthur Foundation and the McGovern Foundation all come together and decide that they're ready to co-develop this strategy around AI and start building common shared infrastructure for the benefit of the entire nonprofit sector and community, is Project Evident where they should come? Can they reach out to you guys to help lead the direction on all of this strategy?
Those are phone calls we will always take.
Excuse me. As a nonprofit ourselves, we would take all those phone calls. I think there are other times when our communities have come together forcefully, collaboratively.
And those were times of immediate crisis, crisis that visited us very swiftly.
I don't wanna call AI a crisis, but AI is a change that, I know, doesn't quite feel like it's happening swiftly because no one's been asked to shelter in place, but it is moving very quickly.
And the longer we sit on the sidelines, sit on our hands, or stay in silos, the more that is really going to hamper our sector's ability to respond in ways that are powerful for vulnerable communities. I think there's something else that we have to be careful about as well if we delay in moving forward. There is a reason why a lot of the work that we do is in the tax bracket of nonprofit: because it's not profitable. You literally can't do it with profit.
That algorithm is changing because of generative AI, because of increasingly relational AI.
Things that were done person to person are going to change pretty dramatically across the economy, but that also means that there are elements and places where we have people of community, proximate to community, creating solutions that could possibly be boxed out by for-profits who now say, well, I can actually make money there. And I don't think this is an argument between for-profits and nonprofits, but I know that at the end of the day, nonprofits are accountable to their communities. They're not accountable to their shareholders, and that's different. That's just different in terms of the responsibility and what you keep your eye on as a manager and as an employee.
Yeah. And if I can just jump in here too, we're also not talking about the role of the public sector, and the role of the public sector and philanthropy is critically important here. And just, like, full disclosure, we're part of the EDSAFE AI Alliance. We were contributors to Opportunity at Scale, a white paper supported by Amazon about the necessary role of public infrastructure for R&D in education as a use case, because AI is an arrival technology that affects everything that we're doing.
And we do need government to play a role in enabling access to a set of things that we need. You know, I don't think we're having any AI regulation federally, or nationally, anytime soon. That is being left up to states. But we do have a public role and a public necessity to facilitate access to the datasets that we need for training algorithms and for setting public learning agendas, questions that we really do need to answer that affect policy.
And those choices and those conversations are often too disconnected from philanthropy.
So early in a lot of philanthropic investment, we're seeing really cool things occur, like, you know, accelerators, or "go build this really cool point solution."
But we need to now, like, grow up a little bit here and be in common cause. If we're really trying to solve a set of things collectively in a state or region or nationally, we do need philanthropy to engage with government at all levels as best it can to promote, sustain, and enable this infrastructure.
Philanthropy can't just build the cars. It needs to help build the roads too.
And that brings us back to the divide.
Yeah. For sure, it does. And it's such an interesting time politically, with the current administration.
And as you said, there's really a very intentional lack of oversight when it comes to the development of artificial intelligence, I think in the spirit of not wanting to fall behind as a country, you know, behind China and others who are also out there developing.
At the same time, I think it puts the nonprofit sector in this really tenuous position, because especially from a funder perspective, we have seen for many years that, as we've been saying, funders are really cautious. They're risk averse. They are such an incredibly academic bunch. They wanna analyze everything to the nth degree. And so the conversation for the most part in my circles around artificial intelligence and funding has been around, you know, is it ethical?
Is it safe? Is it fair? Are we potentially causing harm? And I feel like funders have been focused on that angle for a very long time now.
And Sarah, I'll give you the last word. I'm hoping we're getting to the point where we're beginning to get answers to all of that, and where funders are beginning to develop a level of comfort in terms of how we go about adopting an ethical AI framework, so that we have that in place as guardrails, and then we can get on to the funding. Like, do you feel like we're getting to that point where we're gonna start, you know, not having to focus as much on the ethics and the fairness and the do-no-harm, and we can move on to funding some, hopefully, incredible transformative solutions that are based on AI?
So this is something that we've raised our hand to facilitate as a neutral broker. And based on current interest, we do think people are moving toward or embracing this idea.
I think what needs to be created has to be a big tent process. This is not something that gets created by five people in a closet who come back with something that's perfect but that nobody will ever use. The only way we're gonna get to adoption is if many people have their fingerprints on it and feel like they were able to contribute and kick the tires, and it's relevant to the questions that they face inside of their organization, be that as a funder or as a practitioner. I think the other really important thing is that it's practical and tactical. I think we have enough ethical frameworks around AI in the for-profit and nonprofit sectors, which are essentially the equivalent of "do no evil."
We don't disagree. Like, yes, do no evil. But if you're a director of technology, it's pretty hard to apply that in an operating capacity. And so we need to really drive things down to being practical, tactical, applicable to the questions that are before people. So we're hoping that we'll move in that direction, and we have some positive signs that we can, which would leave us in a place where more innovation dollars can hopefully flow.
The innovation dollars are important because the for-profit sector has had about fifty years to experiment with how to drive profitability with AI. We're in the infant days of how to drive outcomes with AI, which is very different from profitability. Profitability basically would get you to outputs.
We wanna take it a whole other level and try to use AI to get to outcomes.
And the second benefit here of being able to do the experimentation is that people will play with AI. And to Kelly's earlier point, policy and how AI is gonna be used in positive ways in communities is really important, and we've gotta have practitioners and funders at the table for that. It doesn't take a long look back at our own history to remind ourselves that there was nobody at the table for social media, and that does not mean our sector has not been implicated in the cleanup around social media. So just because we don't play with AI doesn't mean we will not be on the front lines again, leading the cleanup on whatever effects come from AI. So it's so important to play. If you play, you have a point of view. If you have a point of view, you'll lift your voice.
Thank you both so much for an incredible conversation. And when MacKenzie Scott and Melinda French Gates call on you and want to fund some AI strategy, please keep me in mind. I want to come do that work with you.
Absolutely, Sam. We would welcome you.
Awesome.
Thank you, Sam.
Thank you so much, Sam.
This conversation makes me feel like we're really on the cusp of a big moment for philanthropy.
I think about that time Sarah hearkened back to the 90s and early 2000s when nonprofits were figuring out how to bring their presence online to tap into the power of the Internet. I think the moment we're in mirrors that one. We're past the point of deciding whether or not artificial intelligence will play a role in social impact. It's already here. Now we have to be intentional about building processes that include the voices of those at the heart of this work. And we have to look for opportunities to figure out the best way forward together.
I'm glad we've got people like Sarah and Kelly illuminating the way for us all. That's all from me today. Thanks for tuning in to Impact Audio, produced by your friends at Submittable. Until next time.