
How AI is Changing the Way Companies Build + Scale — Applying AI to Internal Operations and Workflows

Join us to learn how AI is already changing the way companies build and scale their organizations. We will discuss ways we can leverage AI to drive systematization, eliminate busywork, foster deep + creative work, and encourage collaborative decision-making across the organization. We’ll focus in particular on the impact of AI for product and engineering teams — changes we are already seeing in how we build and develop product and in the internal product and engineering workflows that power teams. We’ll also discuss concrete ways you can start to harness AI in your day to day.

Madeleine Reese is founder and CEO of Allma, a Series-A startup tackling the blizzard of work facing teams every day — freeing teams up to focus on the work they actually want to do. Allma is developing an AI sidekick that sits on top of a company’s entire suite of tools to power teams’ conversations and workflows. Prior to founding Allma, Madeleine was a senior manager at Bridgewater where she was responsible for using data-driven management tools to oversee the systemization of departments. Before Bridgewater, Madeleine was an early employee at Artsy, learning how to build and scale a rapid-growth tech company. She began her career at Goldman Sachs. Madeleine received her M.B.A. from Harvard Business School and her B.A. from Columbia University.

Wenyu Zhang is a co-founder at Our Bot, a low-code product to improve bots and manage their knowledge bases. Previously, he was Head of Product at Copy AI.


Wenyu Zhang 0:07
All right. Yeah. So we're super excited to be here. We thought we would try for this talk to be really helpful and practical about how PMs and engineers are using AI and LLMs in their day to day. But first, some introductions. Madeleine's gonna start.

Madeleine Reese 0:28
Yeah, I'm Madeleine. I'm founder and CEO of Allma. We are a Series A startup helping teams tackle the blizzard of work that's coming at us each day. Our product enables companies to deploy AI more effectively throughout the organization: it helps teams do everything from setup to managing and fostering collaboration around how teams are using AI to power their day-to-day work. We've started with a prompt management tool that has real-time prompt refining, storing, sharing, and discovery, so that teams can easily use AI to run prompts in any of their work tools, from Slack to the browser, then very quickly edit the prompts and the output, crowdsource prompts with their teams, and run and store sessions across all different types of daily use cases.

Wenyu Zhang 1:20
I'm Wenyu. I am the founder and CEO of Our Bot. We're a knowledge management software company currently focused on helping companies store all their documentation and standard operating procedures, so that it can be used to train new employees, retrain existing employees, and also be a data set for any bots they want to make. Before that I was the Head of Product at a company called Copy AI, and before that I was the first hire at a company called Point. All right, so we wanted to start by giving some examples of ways that people are using AI, and we broke these into two separate use cases. The first thing we'll be covering today is a whole class of tasks that you can think of as helping with all the mundane busy work of your life at work. Then separately, Madeleine will talk about a whole other category, which is more creative, deeper work. Once we cover those, we'll go over a framework for how to prompt engineer, then some limitations to remember as you're working with this, and we'll end with a note on where this might impact organizations. But the first thing I thought I would do is start by playing a quick little video. There won't be audio with it, but you'll get the point. I think many of you have seen something like this before, but this is just going to be an example of someone using ChatGPT for a task at work. Basically, he's taking some documents stored elsewhere, putting all of that information in a regular text file, and copy-pasting that into ChatGPT along with a pretty straightforward instruction. You can see what it's doing there. I don't think this is that surprising, right? By now, we've all seen what the technology is capable of. But I thought that was a good entry point into thinking about the types of tasks that AI can help with when it comes to busy work.
So I think the simplest way to think about it is: when all of the information to accomplish the task exists, when you can put it all in the prompt, you can perform a lot of routine functions on that data. This is different from cases where you may need to add or find more information, which Madeleine will talk about. But in these cases, you can think of several functions. One is extraction. You can think of this as auto-populating forms. Or, if you have a lot of help documents and a user has a question, there's probably a part of the information in there that's relevant for the user. So, extraction. Then there's also summarization. Let's say you have a large amount of user feedback stored somewhere, or you have a lot of tickets, and you need to synthesize those; it's pretty good out of the box at doing that. Of course, there are meeting notes and action items too: you could give it a transcript from a call and ask it to format it a certain way. For summarization, it's useful to think about it as: all the information is generally still there, but now we're just condensing it or summarizing it in some way.
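As a concrete illustration of the pattern above, here is a minimal sketch of packing "all the information" into one summarization prompt. The helper name and prompt wording are illustrative, not from any specific product; the actual model call is left out, since any LLM API could be used with the resulting string.

```python
# Build a self-contained summarization prompt: all the data (the transcript)
# goes directly into the prompt, along with the desired output format.

def build_summary_prompt(transcript: str, output_format: str) -> str:
    """Compose a summarization prompt from a meeting transcript."""
    return (
        "Summarize the following meeting transcript.\n"
        f"Format the output as: {output_format}\n\n"
        f"Transcript:\n{transcript}"
    )

prompt = build_summary_prompt(
    transcript="Alice: ship Friday. Bob: I'll write the release notes.",
    output_format="bullet points with one action item per person",
)
print(prompt)
```

The same shape works for extraction tasks: swap the instruction line for "Extract the following fields..." and keep the data in the prompt.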

Wenyu Zhang 5:03
And the last category I like to think of as translating, or formatting. You can think of this as changing the linguistic syntax or style in some way. There are a lot of products out there right now where you can enter bullet points about your company and bullet points about your sales prospect, and they will auto-write the cold email for you. In that case, crucially, the bullet points are already there. The data is already there; you're just changing how the information is represented. Another great example is any kind of natural-language-to-SQL conversion tool. And I don't know how many of you use Copilot, but a couple of engineers I work with have mentioned that sometimes they'll just write the comment, which is in the form of natural language, and the code will sometimes be suggested for them. So yeah, generally, I would think of this as: there are a lot of mundane, highly repetitive routine tasks where the information is already there, and that's something AI can work wonders with, similar to just filling out Mad Libs. We don't have time to go over all of the other examples, but I just wanted to quickly show a list so you can get a glance at the many other tasks that fall into this category of automating busy work.
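The cold-email case above can be sketched as exactly this "Mad Libs" pattern: the bullet points already hold the data, and the prompt only asks the model to change how it is represented. Function and field names here are illustrative, not from any specific product.

```python
# Fill a prompt template from data that already exists (the bullet points);
# the model's job is purely reformatting, not inventing information.

def cold_email_prompt(company_points: list, prospect_points: list) -> str:
    company = "\n".join(f"- {p}" for p in company_points)
    prospect = "\n".join(f"- {p}" for p in prospect_points)
    return (
        "Write a short, friendly cold email.\n"
        f"About us:\n{company}\n"
        f"About the prospect:\n{prospect}\n"
        "Keep it under 120 words."
    )

prompt = cold_email_prompt(
    ["We make prompt-management tooling"],
    ["Series A fintech, 40 engineers"],
)
print(prompt)
```

A natural-language-to-SQL tool follows the same shape: the schema and the question are the data, and the instruction asks only for a change of representation.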

Madeleine Reese 6:29
Yeah, thanks, Wenyu. So I'm going to walk through the other big bucket of work that we're seeing with AI, and that's brainstorming and creative deep work: ideation, decision-making, learning. When using AI as an ideation partner, the expectation is augmentation: leveraging GPT as a partner to enhance the output of creative deep work, but not replace it. The human brain is still very much involved in creating this work. I'll walk through some common categories we're seeing under the umbrella of creative deep work. The first is around decision-making. What's great is that AI is an objective third party, so it can be very useful in helping us navigate decisions as a true triangulation partner. You can turn to AI to surface trade-offs, so have the technology force a list of pros and cons. You can have it present counterarguments, helping you invert the decision and come to it from the other side. If two people are disagreeing, GPT can arbitrate and objectively mediate the disagreement. And it can also guardrail egos. As humans, we all have egos; it's nearly impossible to see ourselves objectively. We can turn to this technology to help surface and identify biases and assumptions, have us be more aware of them, and come at these decisions more objectively. To give a very real-life example, last week I turned to GPT to help me work through a product roadmap prioritization decision. One of the things I'm focused on is increasing activation of our Allma Slack product, and I need to solve for deepening usage and engagement. So I turned to GPT to help me decide which of two product features to prioritize. And I can actually share my screen and show how it went.

Madeleine Reese 8:25
So I prompted it around this dilemma and challenge I was facing, and very quickly GPT surfaced and analyzed the pros and cons across the two product features I was considering. It listed them out very clearly, supported some of my thinking, reframed some of my thinking, inverted some of my thinking, and helped me see things laid out clearly and objectively. Once I got that initial output, I could then refine it: I could ask it for counterarguments, I could ask it to build on certain of the pros and cons, and really leverage it as a triangulation partner. The second category that we're seeing (we can go back to the slides) is around brainstorming: using AI as a companion to create and augment content. You can brainstorm full-blown written documents, from PRDs to strategy docs to compiling engineering standards to running product campaigns and blog posts, and really use GPT to get a first draft built out that you can then take and refine. It can suggest things you wouldn't necessarily have thought about: a new way of approaching a line of code, a new idea for a product campaign, a new way of reframing a section of your strategy doc. You can also use it to troubleshoot: to debug code, as a QA partner, to identify security vulnerabilities. You can use it to write code in different programming languages, and what's cool is that GPT-4 is a visual model, so you can even take sketches and wireframes and create full-blown websites and designs off of them. And then the final category is using AI as a way of learning about a new subject. You can use it like Stack Overflow if I need to figure out a new API. It's almost like a master-class learning experience: if I need to figure out how to launch a new product successfully, I can have it teach me best practices.
If I need to figure out how to design something, I can turn to it; if I need to learn a new programming language; if I need to learn a new language, writ large. Really, the possibilities here across these different categories are endless.

Madeleine Reese 10:45
There are many more examples of different use cases that we haven't touched on and don't have time to, but these are, generally speaking, the broad categories of using AI as a creative deep-thinking partner. So now we'll walk through a very practical framework that you can use to build and refine prompts using GPT with your work. It's worth noting that crafting prompts takes thought and effort relative to what you're trying to achieve, so the more you can build this muscle up, the easier it will become. And generally speaking, the approach is not about nailing specific language nuances; it's much more about problem solving. So come at prompt creation and refinement from this perspective of problem solving, starting by really defining the goal of why you're using AI in the first place for whatever it is that you're doing at work. Really start by scoping out: what is the core problem for GPT to solve here? Underneath that, define a role and a clear set of expectations for GPT. So literally assign it a role. You can say, "You are an AI product manager who is an expert in prioritizing roadmap functionality." Then, subsequent to that, start shaping the parameters of how you want GPT to respond with output. You can define the voice and tone of your desired output. You can define the intended audience who's going to ultimately receive the output. You can feed GPT very specific context about the work that you're doing and the use case. You can give it stylistic pointers: bullet points, word limits, "I want this to be an email." And you can feed it examples of past work output, so that right off the bat the output matches your style, your voice, and your intended substance. Once you get that initial output, you're entering a state of refinement, where you're going back and forth with GPT, probably editing the prompt to get the output closer to your desired end state.
And so, in the world of problem solving, again, come at this exercise from problem diagnosis to solution. Start by identifying, based on that original output, where the different gaps and opportunities are to refine what you're seeing in the output. If there are multiple places you want to refine based on the output, break down each one into its own problem; you might have multiple subsets that you're then going back and refining with GPT. For example, if you're using it to generate a product requirements doc, maybe you're breaking down each section of that output and tackling it one by one across overview, risks, core functionality, etc. And then be very specific about what you want changed, as specific as possible. Another technique during refinement is working to reframe problems. To reframe, you can take the audience's perspective and feed it to GPT. You can use analogy or descriptions to represent the problem in different ways. You can use abstraction. And you can also play around with different design constraints: adding or subtracting text limits, context, outcome criteria, and other guardrails.
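The scaffolding steps described above (role, task, tone, audience, context, examples) can be sketched as one small prompt-composition helper. Function and field names here are illustrative, not from any specific tool.

```python
# Compose a structured prompt from the framework's building blocks:
# role, task, and optional tone, audience, context, and past examples.

def build_prompt(role, task, tone=None, audience=None, context=None, examples=None):
    parts = [f"You are {role}.", f"Task: {task}"]
    if tone:
        parts.append(f"Tone: {tone}")
    if audience:
        parts.append(f"Audience: {audience}")
    if context:
        parts.append(f"Context: {context}")
    for ex in examples or []:
        parts.append(f"Example of past output:\n{ex}")
    return "\n".join(parts)

prompt = build_prompt(
    role="an AI product manager who is an expert in prioritizing roadmap functionality",
    task="recommend which of two features to build first, with pros and cons",
    tone="direct and concise",
    audience="the founding team",
)
print(prompt)
```

Refinement then becomes editing one field at a time (tighten the tone, swap the audience, add an example) and rerunning, rather than rewriting the whole prompt from scratch.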

Madeleine Reese 14:15
And if you're doing summarization or analysis use cases (the first bucket that Wenyu talked through), you might want to actually add constraints and helpfully focus and narrow in the output. For creative work, you might find yourself wanting to remove constraints and really open the aperture of what's possible. So, zooming out: really come at prompt building and refinement for work with this problem-solving mentality, lean into the experimental puzzle-solving, try out different things, and then keep track of what you're learning and your insights so that you can compound your ability to use the technology going forward.

Wenyu Zhang 14:57
So there's obviously a lot that's powerful with AI, and it has a lot of what I consider well-deserved hype right now. But this wouldn't be a complete talk if we didn't address some of the limitations of using ChatGPT and other LLMs. So, hallucination: for those of you that don't know, it's just a term of art for when large language models make up something that's not correct or doesn't exist. I think the best example of this (maybe some of you have seen it in the news) is the lawyer, a personal injury lawyer representing a client suing an airline, who made a legal filing where all of the precedent, all the cases it referenced, were just totally made up. The other side couldn't find these cases in the literature, and the judge eventually had to ask what was going on. Eventually they realized the lawyer had used ChatGPT to generate the legal brief. So that's a pretty good example of what hallucination is. One thing to keep in mind is that some people would say that for some use cases, hallucination is a feature, not a bug. If you're doing creative work, if you're brainstorming, you kind of want it to give you multiple perspectives and ideas, because that helps you brainstorm along with it. But there are many, many use cases where you would not want it to hallucinate, ever. Some of the most common techniques for preventing hallucination are: one, using some type of classifier that says, "Hey, don't talk about this topic." Or two, using a retrieval model: you first go into a knowledge base and find the information that's relevant, then give that to ChatGPT and say, "Hey, use this information." That way there isn't just a blank space where it's making things up.
Yeah, but again, I think most people right now would say this is a long-term problem; it's hard to imagine a future where it just goes away. But the extent to which you can mitigate it and work with it is the extent to which you can unlock different use cases. The second kind of limitation is data privacy, and I think there are two categories of this. One is PII: in a lot of industries, you're dealing with people's personal information, which is heavily regulated, and a lot of companies may not want, or may not be allowed, to send that information to external LLMs.
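The retrieval idea mentioned above can be sketched in miniature: look up the most relevant snippet in a knowledge base first, then ground the prompt in it so the model isn't answering from a blank space. Real systems use embeddings and a vector store; simple word overlap stands in for that here, and all names are illustrative.

```python
# Toy retrieval-grounding sketch: pick the knowledge-base entry sharing the
# most words with the question, then constrain the model to answer from it.

def retrieve(question: str, knowledge_base: list) -> str:
    q_words = set(question.lower().split())
    return max(knowledge_base, key=lambda doc: len(q_words & set(doc.lower().split())))

def grounded_prompt(question: str, knowledge_base: list) -> str:
    context = retrieve(question, knowledge_base)
    return (
        f"Answer using ONLY this context:\n{context}\n\n"
        f"Question: {question}\n"
        "If the context does not contain the answer, say you don't know."
    )

kb = [
    "Refunds are processed within 5 business days.",
    "Support hours are 9am-5pm Eastern, Monday through Friday.",
]
print(grounded_prompt("How long do refunds take?", kb))
```

The "say you don't know" escape hatch matters: it gives the model a sanctioned answer when retrieval misses, instead of an incentive to invent one.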

Wenyu Zhang 17:42
The second category is proprietary information. If you're a company with a lot of knowledge and sensitive information you wouldn't want leaking out into the world, and you're trying to use LLMs for some type of internal use case, that's also something to consider. And I would mention here that a lot of this depends on the company's legal policy and where they stand. There are a lot of API providers that say, "Hey, we won't use the data to train our models if it comes from this source or this API," and you can opt out, and things like that. So it definitely comes down to the company's policy. The last thing I'll note is that we're relatively early in this age of new functionality driven by these LLMs. We're seeing a lot of large incumbents releasing new features and packaging new functionality into their existing UX workflows. We're also seeing startups trying to tackle the challenge of rebuilding the workflow from end to end. But for a lot of people right now, the workflows are not really ideal. We'll talk about this later as well, but there's a lot of copy-pasting: in my daily work, I work between multiple apps, with data stored here and there, copy-pasting from ChatGPT into some other app. And that's a challenge, right? Because if the whole point of this is to save us time in our day-to-days, but it requires us to create a lot of workarounds in our workflows, then the juice may not be worth the squeeze. So that's something else to keep in mind. But I think this is one that could definitely be solved with time by existing companies and upstarts.

Wenyu Zhang 19:46
So, we've talked a lot about the use cases for how individuals have used LLMs and how one might do so. But it'd also be good to talk about where organizations as a whole go, because I think there's a second order to this, where the way that people work together as groups is going to change. There are two topics that we feel passionate about; these are the areas in which we have our respective startups. The one I'll tackle is knowledge management. I think you saw this explosion of "chat with your docs" applications and products and use cases, and what we're realizing is the power of documents and the power of wikis. When the data is stored somewhere, it is now really easy for us to take that data and use it in other contexts. I think that means two things. First, documents, wikis, wherever information is stored, there's going to be much more leverage on that: you can turn it into a chatbot, or use it to power other AI applications. Second, I think we'll see more and more people actually needing to manage their internal or external documentation more carefully, thinking of it as, "Oh, this is the data set that powers some AI functionality downstream." There's also another whole area where these transformers have really enabled better search as well. I don't know how often this happens to you folks at work, but it's like, "Hey, I can't find that report from three weeks ago," or a new hire asking, "Wait, which HR software do I use to do this thing I need to do?" There's a lot of disparate information and a lot of silos in the workplace. Imagine AI-powered search, where you just have one magic bar and can get whatever information is relevant for your work; if you imagine that for a whole organization, that's also something that's really exciting.
And the last thing I'll say is that if you think about ChatGPT, one of the biggest applications of it is actually education: students use it (and probably for plagiarism as well), but it shows there's this natural ability to use it for working with new knowledge and education. I think for business it's only a matter of time before we see that adopted more and more. If you think about all the cases where education has to occur within a business (you have a new hire that you need to onboard, or you need to properly communicate something across different teams), I think you're also going to see these LLMs used more and more for training and education within businesses as well. Madeleine?

Madeleine Reese 23:01
Sweet, thanks. Um, I'll tackle the workflow piece in terms of how we can use AI effectively across the organization. As Wenyu mentioned, a few limitations right now include the fact that AI at work is quite siloed. It's very individually focused: I could be using this technology and have no idea that others on my team are using it. And it's also pretty disjointed from our actual way of working. So the real power that I foresee in terms of AI at work is when we can use this technology more collaboratively, more integrated with our day-to-day workflows, and alongside our team. There are a few key elements I see in moving to this world that I'll touch on here, in terms of what we can expect to see going forward and how we're thinking about some of these things at Allma. The first is around better embedding: really being able to use AI seamlessly at work. To achieve this, the technology needs to work with your existing workflow and across the many different SaaS applications that you're using throughout the day. So we can expect to see many more integrations across different tools; we can see legacy companies building AI into their products, as well as new AI companies building deep AI workflows end to end for certain use cases. And then, alongside this, just a better way to run and refine prompts at work.

Madeleine Reese 24:26
So GPT doesn't have a great editor interface today, so expect to see different ways of being able to more easily run and then edit prompts and outputs in your workflows, so that you can make changes easily and see how they're changing things. And then, once you've nailed that prompt and output, better ways to actually improve and store your prompts, so that if you're going to keep using them repetitively at work, you have an easy way to access what you've already learned and figured out. Second, being able to use AI collaboratively across the organization: using the technology and being able to crowdsource what you're learning across teams and harness what others are learning as well. So think being able to share and discover different prompts, different use cases, different ways people have implemented different AI technologies across teams, and the ability to let us all learn from each other much faster and really build this collective muscle in using this technology across the company. This will also allow companies to establish more consistent practices and philosophies for how they want to deploy AI at work. Today it's very Wild West, kind of everyone doing their own thing, so being able to actually harness those practices and philosophies and develop their way of working as a company matters. And then collaboration: a world where you can actually collaborate on work using GPT. If we're building engineering standards and best practices as a team, we'd be able to work together simultaneously in the same prompts, the same output, the same docs, using the technology. And then finally, this notion of developing more context retention and memory through time for how we're working and how we're using AI to do so.
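The prompt-storage idea described above can be sketched very simply: each saved prompt keeps its version history, so a team can refine, roll back, and share what already works. The class and method names are illustrative, not the Allma product's API.

```python
# Minimal versioned prompt store: every save appends a new version,
# and "latest" retrieves the most refined copy for reuse or sharing.

class PromptStore:
    def __init__(self):
        self._versions = {}  # prompt name -> list of versions, oldest first

    def save(self, name: str, text: str) -> int:
        self._versions.setdefault(name, []).append(text)
        return len(self._versions[name])  # version number, starting at 1

    def latest(self, name: str) -> str:
        return self._versions[name][-1]

store = PromptStore()
store.save("customer-email", "You are a support writer. Draft a reply to the customer.")
v = store.save("customer-email", "You are a support writer. Draft a concise, friendly reply.")
print(v, store.latest("customer-email"))
```

Even a plain notes file gives you the first version; the versioning and sharing layer is what makes the learning compound across a team.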

Madeleine Reese 26:18
So, applying the concept of agents with goals and memory to work. Imagine a world where each time you go in, you're able to pick up the last session, and it has all the context, all the prompts, the output, etc.: kind of these different experts that are able to guide you through the different elements of your workday. So my API expert, my engineering standards expert, my PRD expert, in a way that remembers prior queries, lets me refresh my memory, explains things in context, and is also trained up on the company's data and my way of working, so it's quite personalized to me and my team. And you could see, through time, with function calling, these experts actually being able to do tasks for work across tools on our behalf. Those are just a few of the different ways. I can actually show a little preview of how we've started to bring some of these elements together with Allma. I have it up on my screen, so you can switch over. First off, just being able to run prompts across different work tools, whether that's conversations in Slack or content in the browser or any of your web-based tools: being able to highlight information, data, and context, and then run it through a prompt. Then, once you get that output, being very easily able to edit the prompts, see how it's changing the output, and play with different parameters, from tone to audience to examples and references of different work artifacts, in context of your work. And then ultimately being able to version control and save these prompts as you're making them better and better with each session, and being able to crowdsource them with your team, so everyone can discover what other people are learning through using the tool and the technology. So, lots of exciting things on this front. At Allma, we're still early on in the product.
So we have a lot to learn and discover through building alongside our early users, and also being a part of the exciting developments that we’ll see with this technology and applications of AI across an organization for day to day work.

Wenyu Zhang 28:47
Alright, let me see, are we back on my screen? Yeah, we are. Great. Cool. So that's pretty much our presentation. To package it into some takeaways: one, there's this category of use cases which is about reducing the busy work in our lives and taking care of simple information-related tasks. This works really well out of the box with LLMs when the data is there, right? When you have a transcript and you need to summarize it, or when you have something that needs to be formatted. Then there's using AI as a companion for higher-level tasks: using it as a brainstorming partner, using it to help you think through something, where you're still the person being augmented and thinking better. And when you're working with prompts, really it's about problem solving and iterating; it's actually interesting how similar it is to how you would coach a person to think about the problem in a different way. And there definitely are still limitations to keep in mind when you're applying it to different use cases, including hallucination, data privacy, and also workflows. But overall, we're pretty excited for what this means for our work and where organizations can go. So that's it. That's our presentation. Thank you all very much. I think we have time for some questions. Um, one quick ask: we're both early-stage founders, so any help that we can get is really appreciated. We're looking for folks who are interested in participating in user research or getting early access for either of our products. There's a link in the chat; it's a simple Google form. If you're interested in talking with Madeleine or me later for user research or early access, just fill out the form at that link. Thanks. Thank you.

Erin Mikail Staples 30:58
Oh, it would help if I unmute myself; this is what happens. I was so excited about it. I'm like, Madeleine, I'm so excited about your tool, I've got it bookmarked, and I went and got myself inverted. That's what happens. But I'm very excited about the tool you've got. Thank you so much for the presentation. We've got a few questions here from the audience. First off, from Jimmy Reikly: "I've only messed with GPT as a normal end user and haven't saved or reused prompts across time. You've got a lot of focus on that being a primary use case in the business applications. How stable has the platform been for that? And how often do saved prompts or prompt strategies need to be redefined?"

Madeleine Reese 31:45
Sure, I can take this one. Um, so I think if you identify that you have a use case you're probably going to do more than once for your work, it's worth investing some time upfront to actually build that prompt and then save it somewhere. It doesn't have to be fancy until the Allma product's out; it can literally be a notes app. But just having it somewhere makes it easy for you to reference, and you're able to pick up on the work you've already done in terms of approaching building that prompt. So I would say, again, nail down the goal of it, the use case, the role, and your basic parameters. I wouldn't expect those to change all that much each time you use that prompt. If you've defined a prompt for, you know, writing a certain flavor of customer email, and you have your voice defined, the role defined, the intended audience defined, and that context and formatting, you'll probably keep those pretty standard each session. Then what you're changing when you run that prompt is really the specific details of that customer communication. You might be feeding in a specific reference: maybe there's a news article about your customer that came out that you want it to reference in the communication, or maybe there's a specific example or detail about a product launch that you want it to reference. That will be the tweaking and refining you'll be doing for that use case.

Erin Mikail Staples 33:16
Awesome. Thank you so much for that. And I think we've got a great follow-up question. It looks like y'all in the chat here are coordinating across different time zones and space and all that good stuff. Matt asks: how often do you see a prompt drift, or cases in which a prompt used to work but then it stops working, and you're like, crap, need to go dig that up again?

Wenyu Zhang 33:39
Yeah, I can take that. I think that for the organizations I talk to that use LLMs for some production feature, ongoing testing is pretty much standard. If you're using it for a personal use case every now and then, obviously when you notice, you'll just work with the prompt and fix it then. But yeah, drift does happen, and the solution, what a lot of companies do, is just have test cases that they make sure to maintain over time.
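The maintained-test-cases practice described above can be sketched as a small regression harness: keep a fixed set of questions with pass criteria, run them against the live prompt periodically, and flag any case that stops passing. The model call is stubbed with a fake function here; in practice it would hit a real LLM API.

```python
# Drift-detection sketch: named test cases, each with a predicate on the
# model's output; any failing case signals that the prompt has drifted.

def fake_model(prompt: str) -> str:
    # Stand-in for a real completion call.
    return "5 business days" if "refund" in prompt.lower() else "unknown"

test_cases = [
    ("How long do refunds take?", lambda out: "5 business days" in out),
    ("What color is the sky?", lambda out: out == "unknown"),
]

def run_drift_checks(model):
    failures = []
    for question, passes in test_cases:
        if not passes(model(question)):
            failures.append(question)
    return failures

print(run_drift_checks(fake_model))  # an empty list means no drift detected
```

Scheduling this to run daily (or on every prompt edit) turns drift from a surprise into an alert.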
