transcript

Matthew Oatts

Nice. All right. Is it Fran or Fran, by the way?

Fran Abenza

Whatever. Like I get.

🌴 Brian

All right.

Matthew Oatts

So, Fran, I know that you've watched some of the recordings, maybe, but this is the first time you've actually joined live.

Fran Abenza

That's correct, yeah.

Matthew Oatts

Cool. So I tend to start with. And I know Z, by the way. Zia is my. Brian in the last meeting mentioned I have a new intern.

Matthew Oatts

Zia is my intern, so.

🌴 Brian

Oh, thanks for joining, Zia. Thanks for the invitation.

Matthew Oatts

To learn more about Smart Connections.

🌴 Brian

Awesome.

Matthew Oatts

So I. Let's see. I want to share my screen here. What is today? The 15th. Cool. Evan's joining. Hey, Evan.

🌴 Brian

Hey. You're on mute, but yeah, we're concluding that you're saying something.

Matthew Oatts

Yes.

Evan Moscoso

Sorry, I'm finishing up lunch here. I'll be done soon. I'll be back in front of my computer.

Evan Moscoso

Good to see you guys.

Matthew Oatts

Good to see you.

Evan Moscoso

Can you guys hear like ambient noise or am I good?

Matthew Oatts

No, you sound fine. Hopefully.

Evan Moscoso

Awesome.

🌴 Brian

Yeah. And in case you guys aren't familiar with the Smart Visualizer plugin: basically, it takes Smart Connections and shows you that graph view. That was Evan's doing.

Fran Abenza

Okay. Like the one that utilizes the canvas.

🌴 Brian

Yes.

🌴 Brian

Well, yeah, it's a graph view that's similar to Obsidian's graph view, but instead of links between notes, it uses Smart Connections to make the connections.

Fran Abenza

Right, right, right. Yeah, I saw it. It's kind of a clever way to manipulate things, in case you want to delete some messages and start a workflow midway, right? For example. I mean, that's one of the uses that I have, you know.

🌴 Brian

So really, I see a lot of opportunity with the Smart Visualizer. There's stuff I'm working on that's more back-end stuff that I'm really excited about getting finished up, so that Evan can build the visualization component, for example, clustering. Right now the Smart Visualizer is essentially limited to the current note, sort of like how Smart Connections shows you connections relative to your current note. But in the hopefully near future, we'll be able to change that into a full-vault view of the graph, using clusters, and maybe, in addition to clusters, showing folders as the central node. And I think that's going to be really cool for finding ways to better organize your vault, or maybe selecting things that are similar, like for use as context. I think there's a lot of opportunity there, right?

Evan Moscoso

I mean, that's the plan. We have the base right now, but eventually we envision being able to manipulate any of your nodes just through the visualization, where you can click on different nodes and say, all right, combine these, or create an idea out of these nodes that I've selected. Things like that, where it'll be way easier than just having a list. So that's the plan. Because honestly, visualization is the future in terms of how to communicate, how to understand, and how to manipulate data. If you just do text, that's something that's cool and it works, but it's very limited in what you can do and what you can perceive of what's happening. So with this visualization feature, that's what we're hoping will kind of bridge the gap between human experience and our notes. Because the thing is, with text and audio, sorry, I'm going to finish this real quick: we live in a three-dimensional world, and we're communicating with audio and text, which only describes a three-dimensional world. But if you had 2D visualizations, that's way more descriptive than just using text and audio. You'd have a way better way of interacting with your thoughts, manipulating them, or updating things. So that's the plan for the future of Smart Connections. And I literally just moved this week, which is why I've been kind of delayed on updating things. But I plan to go all in with Brian. We've been talking about even doing some streaming of coding sessions for Smart Connections, things like that.

Evan Moscoso

Yeah, we'll invite you. Like we talked about, just streaming our coding sessions as we're developing new features or as we're changing things. We'll have the chat open, and if anyone wants to talk, we'll talk with them as we're updating things. But that is a plan for the near future, because I literally just moved. I'm getting my studio set up, and we'll be doing that.

🌴 Brian

Yeah, I'm looking forward to that one. Cool.

Matthew Oatts

All right. Thanks for joining, Jason. We've got a kind of nice little quorum here. So I'm going to go ahead and start sharing the screen again. We'll hop into our usual rhythm of lean-coffee-style meetings. And I apologize, I am just on my laptop because I'm feeling under the weather, so I'm not with my multiple-desktop setup. So right out of the gate, as I share my screen, we're going to have the infinity mirror effect.

Evan Moscoso

And then real quick, I'm heading back to my computer. So I'm at this restaurant, but I'm like two blocks away. I'm just going to head there and I'll be on my computer. So just FYI, I'll be right back.

Matthew Oatts

Okay.

🌴 Brian

And Jason, there should be a link. Is the link in the chat for the Miro board? Did somebody already put it in there, or should I? I'll drop it in if it isn't there.

Matthew Oatts

I got it right now.

Matthew Oatts

Okay, so here's the link to the Miro board. Oh, you have a better link than I do, Brian. Or at least I just copied from the browser.

🌴 Brian

I don't know.

Matthew Oatts

That's probably smart to do. All right, so I always do this little preamble in case people aren't familiar with the concept of a lean coffee. A lean coffee is just a structured way to have a relatively unstructured time together. Essentially, we're going to use sticky notes here to capture and prioritize topics. It's probably not going to take eight minutes; we'll probably do that at five. We all hop in, grab a sticky, put down a topic that we might discuss, and then we'll put them over here. And then from there, I'll use the voting features of Miro: everybody will get three votes, and then we'll stack-rank the topics we want to talk about. And then for the rest of the time, we essentially go top-down through the prioritized list and sort of try to time-box the topics. This community doesn't particularly have a hard and fast rule, but what that does is give us enough coverage of the different types of topics people are interested in, without too much overhead and structure on how the meeting runs. And as I mentioned before, we do have a friendly bot that's joined to record what we chat about. One of the reasons we use that bot, I actually use Grain to do that, is that it gives us already-clipped, AI-driven topic summaries that people can go back and look at later, which I will post in the fun little green box on the right. So, any questions about that before we hop in? Sweet. Starting the timer for five minutes. I'm going to stop sharing my screen. I'm actually probably going to stop sharing my video too, so that I can get a little bite to eat. But yeah, five minutes here: just grab a sticky, add a topic, and then you can either leave it there and I'll drag it over, or you can drag it over there. Okay, great.

🌴 Brian

Thanks, Matthew.

🌴 Brian

I like to see that you brought your coffee, Jason. Perfect.

Jason Bates

Well, it's a little later in the UK, so it's a tea, but yeah.

🌴 Brian

Okay.

Jason Bates

But I don't think lean tea sounds as good as the lean coffee.

🌴 Brian

Not quite as good. What do you got, Fran?

Fran Abenza

Honestly, my. My roommate kind of just got me a bit of vodka and cola.

🌴 Brian

Oh, yeah. So, I mean, lean vodka or lean cola? That's a little something. Yeah.

Fran Abenza

That's not that lean.

Evan Moscoso

I'd say that'd definitely put some pep in your step.

🌴 Brian

We could have like.

Matthew Oatts

We could have like a holiday party or something, and that can.

Evan Moscoso

True. It could be a lean.

Evan Moscoso

Lean vodka, lean whatever.

Matthew Oatts

Lean eggnog, lean... eh, I don't know about all that. All right, it looks like we have a good bit of topics here, and I don't see other people adding more, so we could maybe move forward and prioritize. Does that sound good, everybody? Cool. Okay. All right. I'll share my screen again, and let me get the voting squared away. So when I start this voting, you should see just the sticky notes. Voting. Don't need these. Oh man, it's a silly interface. All right, since there are six topics, I'm just going to do two votes apiece. So pick the two of these six topics you want to do. You'll see a little prompt that says join voting. So go ahead and join voting, and just select the two that you would be interested in us talking about. If for whatever reason you haven't been able to join the Miro board, you can always just put your votes in the chat, and I'll make sure we factor those in. I'm going to stop sharing so that I don't overtly influence the votes. Us Americans, we always have voting and all that stuff totally squared away, so I'm doing it straight shooting.

Jason Bates

Surely no election talk on this call, please.

Matthew Oatts

Dear Lord, no.

Matthew Oatts

I'm actually living with my in-laws right now. So there are three generations in this house; we're buying this house from them. It's a temporary period of time, and needless to say, I'm very glad the election is over.

🌴 Brian

Oh man, in-laws are the best when it comes to political conversations.

Matthew Oatts

Oh yeah, that's great.

🌴 Brian

All right, I think I was the last holdout on the voting. Are we done?

Matthew Oatts

I think we're done here. I'll share my screen again. We'll take a look at the results. Okay, cool.

Matthew Oatts

We got desired workflows, so this would be a pretty good one. So let's do this one, just doing this and then this, and we'll move these guys over here. We might get to those if we have time, but there was a front-and-center winner here. So let's start with the desired workflows. Whoever put this topic, if you want to do a little intro to kick off the conversation, and then we'll go from there.

🌴 Brian

So that one was me, and my intention is really just to learn from you guys: what are the workflows that you imagine are possible that you're not quite able to do yet? And I'm seeing this other note down here. I think I'm just going to pull this one over here, if that's okay, because I think this starts to fall into the desired-workflows thing. So, if I'm correct, maybe whoever put the key use cases one, exploration, research and synthesis, wants to start out with what their desired workflow might look like, and then give the rest of the people some time to think about their workflows. So I'm going to leave it there if anybody wants to chime in.

Jason Bates

Yeah, so I put the key workflows one. I'm Jason, a tech entrepreneur in the UK. I built digital banks, Monzo and Starling, two of the unicorns here, was head of product, led a lot of stuff there, built banks around the world. So I come from a kind of tech background. But I'm also writing a book, so I'm using Obsidian, and it was less about my specific workflow than, I guess, trying to look at the major workflows: finding knowledge or being exposed to knowledge, and exploring that. Then obviously there's a consumption of it, and a collection into some kind of inbox. There's an organization of that, there's a synthesis of that. Then there are certain things that come out of that, whether they're articles or essays or books or artifacts, and, I guess, publication of that afterwards. But I know that that's only one use of Obsidian, and that people use it for a variety of things, everything from running Dungeons and Dragons games to personal management. So I didn't want to get too much into my specific use case, but to try to enlarge it, I guess, to a large enough set of major areas that includes most people. So I guess I'll leave it there.

🌴 Brian

Yeah, that's interesting. I'm trying to think if I have a good follow-up question. Anybody else want to chime in?

Matthew Oatts

I just want to make sure... because I love this topic, by the way. If folks have watched these, depending upon the topic and my understanding of it, I sometimes will just take some notes on screen so that people can pick off of something concrete. One of the reasons I'm also interested in this topic is that I've always viewed Obsidian like this: there's the intake, collection, and synthesis of stuff, and the organization. I think this also gets into knowledge-management strategies and things like that. And then this sort of green space is where I've always seen this wonderful beauty in the Obsidian community: when I think of workflow, this is the sweet spot, where there's so much opportunity to either build Smart Connections or to apply Smart Connections to our own usage of Obsidian. And then sometimes there are the known, concrete outputs, like books or reports or whatever. But when I was hearing Jason describe that, it's like this middle area is where I'd love to pull more threads with this crew. I don't know if that resonated with anybody, but that's sort of where my head's at.

🌴 Brian

Yeah. Just from my perspective and what I've been building, I imagine outcomes could almost be templated. Something I've been thinking a lot about: imagine we had a template that had, really, any amount, but let's just say half a dozen prompts inside of it. Then the AI goes through a process of iteration, from retrieval to outputting based on those prompts, and then re-feeding the progress back in to the next iteration. And a word that I think is starting to get a little popular around this idea is generating reports. So right now it's very much question-answer, but if you can come up with a report on what it is that you want, that already takes it quite a bit farther than just the typical question-answer format. And then I imagine, from there, you can start building in ways for the AI to actually complete actions based on that report.
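The iteration loop Brian describes can be sketched in a few lines. This is a hedged illustration only: `run_report_template` and the `call_llm` callable are hypothetical names invented for this sketch, not an actual Smart Connections API, and a real version would call whatever model the user has configured.

```python
# Sketch of a report template: several prompts run in order, and each
# step's output is fed back in as context for the next iteration.
# `call_llm` is a placeholder for whatever model call the user has set up.

def run_report_template(prompts, source_notes, call_llm):
    """Run each prompt in order, re-feeding accumulated output as context."""
    context = source_notes
    sections = []
    for i, prompt in enumerate(prompts, start=1):
        output = call_llm(f"Context:\n{context}\n\nTask {i}: {prompt}")
        sections.append(output)
        # Re-feed the progress into the next iteration.
        context = context + "\n\n" + output
    return "\n\n".join(sections)
```

With half a dozen prompts in the list, this turns the question-answer loop into a single generated report, which could then drive follow-up actions.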

Jason Bates

Yeah, I mean, from my personal workflow, Smart Connections and your series of tools is interesting in almost all of those areas. Because when it comes to finding new books or new articles to read that might fit within a particular theme, I can jump into ChatGPT and away I go. And when I start to collect those things, I can actually ask about my existing notes and what that might connect into. And then the synthesis around something will grab me, and I tend to use something like TextExpander to almost have a dialogue with the AI around some of these themes, before thinking, well, this essay or this chapter is really about this. And then, well, how could I break that down? How could I structure it? And then I'm using more generic ChatGPT to get into that. But obviously, with the embeddings and being able to find particular articles, away we go. So I agree with that. And I'm also reminded of, is it fabric? The command-line tool that has a series of templated prompts: extract wisdom from this, pull out the key ideas, what are the quotes. So it's a little similar to your templating idea, but with community-sourced, quite significant templates that are on GitHub. So you can actually pull the text of all of the various prompts that people have used from GitHub. Yeah.

🌴 Brian

I just made a note under the action item section, because I'm not familiar with fabric, so I'm going to make sure I check that out.

Matthew Oatts

Can you say a few words again?

Fran Abenza

What is fabric? Very quickly.

Matthew Oatts

Oh, we're going to go explore it. Let's just take a look at it.

Jason Bates

It was a tool by a guy who realized that he liked to do this. It'll be "fabric". That's probably the one. This.

Matthew Oatts

This thing.

Jason Bates

Yeah. So it's basically command line, where you could throw a YouTube transcript in and then ask it specifically to extract the wisdom from it. You essentially use some small commands; he wanted to bring together different open-source prompts to use on this. So if you go up to the patterns directory in the file list... yeah. And so you can see the list. If you go down to extract wisdom, because I quite like that one. In the main... yeah, anywhere, keep going... W... extract wisdom. So the idea: you can click on the system.md, and you see, basically, this is his prompt that then automatically gets fed in. He's put them on GitHub so people can actually submit new ones, and they can be used directly. Rather than embedding them into the tool, it actually pulls the file from GitHub to use. You can see what it does; it's actually quite a nice prompt. It gives 15 of these things, 10 of these things, and away you go.
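For anyone who wants to reuse those patterns outside the fabric CLI, the layout Jason walks through (a `system.md` file per pattern inside the `patterns` directory) makes them easy to load programmatically. A minimal sketch, assuming a local copy of the patterns; `load_pattern` and `compose_messages` are illustrative names, not part of fabric itself:

```python
from pathlib import Path

def load_pattern(patterns_dir, name):
    """Read patterns/<name>/system.md, the layout visible in fabric's repo."""
    return (Path(patterns_dir) / name / "system.md").read_text(encoding="utf-8")

def compose_messages(system_prompt, user_text):
    """Pair a pattern with user input in the usual chat-message shape."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]
```

The resulting messages list can be sent to any chat-style model endpoint.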

🌴 Brian

Yeah, that's great.

Fran Abenza

One of the things that would be wonderful, right, for several reasons. Also in the... on the quick... how does Obsidian call it now? Quick... I forgot. But pretty much, I would love to have something similar to Fabric, but that looks into what I already have in my Obsidian and only adds what's new.

🌴 Brian

Yeah. So, having that open-source prompt library: I think a prompt library has been brought up on these lean coffees in the past. So I think it'd be great if I could find a way to integrate with that fabric project, so maybe you could easily pull these things in. A lot of what I do, I'm trying to enable contributions, but we're still really early in the vision of the whole project. So I really haven't been focused on trying to pull in contributions as much, because, as you guys all probably know, things are changing and breaking so fast, I'd hate to be soliciting contributors and then just breaking their stuff all the time. But one of those vectors for contributions is definitely prompts. So while there are some prompts that are built into the tools, that's really not my intention. My intention is to have all of those prompts customizable, even if there is a default built in. Basically, anywhere there is a prompt, I don't see why there shouldn't be a way for somebody to go in and edit that prompt.

Jason Bates

Well, if you want to, for instance, create a map of content or something around a particular folder, maybe it's not integrating with Fabric, but just creating a place on GitHub where we can put prompts about the different activities that you would undertake in a vault, whether it's find the 10 closest notes, write me a summary, create that note for me, or I want an index page for this folder: what headings could I do? And I'm sure there is a sort of finite number of very common tasks that we could iterate prompts around.

šŸŒ“ Brian

Agreed.

Evan Moscoso

Further expanding upon that. So I looked at the prompt and everything that you guys showed. One of the things that I think we should be cognizant about is the context window of the AI. Because the prompt you showed seemed like there was a lot of text in there. And one of the things that Brian has been working on, or that I've seen him work on, is... F... what is it? FO notation? What is that?

🌴 Brian

Oh, I remember what you're talking about.

Evan Moscoso

Yeah, off the top of my head, essentially it's a way to heavily reduce whatever natural-language text you have into something like "F implies P" and so on. And so we could even have something where, if you have a super long prompt, we have a conversion that just transforms all that natural language into a shorter prompt, to give you more context. Because it's only as good as how much context you give it, and if you give too much, then you have less in terms of how much it can output. So I think that would be a big benefit: when you have people with all these big, detailed long texts, compile that into that notation so you have a lot more room left for the output, to give better results.

🌴 Brian

Just a thought.

🌴 Brian

And I remembered what it was: it was F-Logic. So I was messing around. There's this existing syntax called F-Logic, and I was kind of creating a derivative, which is one of the great things about the LLMs: you don't have to have a syntactically accurate representation in order to get the LLM to understand what's going on. And I know there's a lot of talk about these million-token, or even billion-token, context windows. The issue with those really large context windows, and why I believe Evan is right in what he's saying, that you do want to limit the context, is because the future I see is LLMs continuously processing in the background. In order to do that, you're going to need local models. As much as I'm very optimistic about local models, I don't know for sure that they are going to be able to handle million-plus context windows, especially for the average user. And on top of that, you'll be limited. Let's say you are able to do a million-token context window, and it takes 10 seconds to run that prompt with a million tokens. What's more valuable: having that one output, or having 10, or maybe 100, outputs from a smaller context window on a smaller model, working toward the same outcome? I know, I think we got a little off the desired workflows. So, does anybody else have any desired workflows that they can imagine working, but the tools just aren't there yet?
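As a toy illustration of the compression idea Evan and Brian discuss (not Brian's actual F-Logic derivative, whose syntax isn't shown in this conversation): map recurring phrases to short symbols and prepend a legend so the model can expand them.

```python
# Toy phrase-to-symbol compression with a legend, so the shortened prompt
# stays self-explanatory to the model. The mapping below is invented
# purely for illustration.

SYMBOLS = {
    "the current note": "N",
    "smart connections": "SC",
    "generate a summary of": "SUM",
}

def compress_prompt(text, symbols=SYMBOLS):
    """Replace known phrases with short symbols, prefixed by a legend."""
    legend = "; ".join(f"{sym}={phrase}" for phrase, sym in symbols.items())
    compressed = text.lower()
    for phrase, sym in symbols.items():
        compressed = compressed.replace(phrase, sym)
    return f"[legend: {legend}]\n{compressed}"
```

The saved tokens go toward either more retrieved context or a longer output, which is the trade-off Evan points at.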

Fran Abenza

I mean, in general, being able to control, to create our own multi-agent frameworks, that flexibility would be amazing. But I think that taps into what kind of frameworks... pretty much being able to control what happens when we do the Smart Chat, right? Rather than just having your vanilla OpenAI answering, I would like to have my own AutoGen flow, right? So, without the need of creating our own API endpoint or stuff like that, just make it easier for us to hook it up to our own GPT, for example. Let's use this example; I think that's something that everyone can relate to: I want to be answered by my own GPT.

🌴 Brian

Yeah, that's definitely on the short-term roadmap. So I am midway through a rewrite of the Smart Chat, and my intention is to make it much easier to at least utilize the popular local model platforms like Ollama. Right now you need to do some custom configuration, but I don't see why there shouldn't be some presets in there. And then, of course, clarifying how you would go about that: when you're saying no API, are you talking about maybe having a way to integrate with an LLM that's running through the command line, among others?

🌴 Brian

Yeah, okay, exactly. I definitely don't see why that wouldn't be possible, because you can basically wrap a command line with JavaScript and execute commands. It's just an integration like that.
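The command-line wrapping Brian mentions is a thin subprocess call. He describes doing it from JavaScript inside Obsidian; the same shape in Python, as a sketch (the `ollama` command in the docstring is just an example, any executable that reads stdin and writes stdout fits):

```python
import subprocess

def run_cli_llm(command, prompt, timeout=120):
    """Pipe a prompt to a command-line model runner and return its stdout.

    `command` is a list such as ["ollama", "run", "llama3"] (illustrative);
    the only requirement is that the program reads the prompt from stdin
    and writes its answer to stdout.
    """
    result = subprocess.run(
        command,
        input=prompt,
        capture_output=True,
        text=True,
        timeout=timeout,
        check=True,  # raise if the CLI exits non-zero
    )
    return result.stdout
```

A plugin would surface the command list as a user setting, which is all "hooking up your own model" needs at this level.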

Fran Abenza

Okay. And anybody else... one second, Brian, before we move away. Let's say that I want to have my own custom GPT, my own Assistants API, to manage my Smart Chat, right? How do I do that?

🌴 Brian

So, currently there is a custom local model section that should be listed with the OpenAI, Anthropic, and Google models, and then there are some configuration options. Based on a lot of people's issues on the GitHub, I am not sure whether or not something has broken and whether or not that integration is working well. But I've decided, instead of trying to dig into that legacy code, that any time and energy I have, I'm going to be putting into just getting that new version out. It achieves the same thing, but it would also make figuring out what the issues are much easier, because the current chat just doesn't follow the same patterns as what I'm working with now, and that makes things difficult to get into. So there should be a way: if you're in the Smart Chat, in the top right corner, the gear icon, there is a custom local model option in there that may or may not work currently. But that's how you would go about configuring a local model, and it'd be through a localhost API.
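For reference, the localhost API route usually means an OpenAI-compatible endpoint served by a local runner. A sketch of building such a request; the default URL shown is Ollama's OpenAI-compatible endpoint (an assumption about the user's setup), and other local servers differ mainly in host, port, and model name:

```python
import json

def build_local_chat_request(prompt, model="llama3",
                             base_url="http://localhost:11434/v1"):
    """Build an OpenAI-style chat request body for a local server.

    The default base_url is Ollama's OpenAI-compatible endpoint; LM Studio,
    llama.cpp's server, and similar runners accept the same request shape
    on their own ports.
    """
    url = f"{base_url}/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return url, json.dumps(payload)
```

The returned URL and JSON body can be POSTed with any HTTP client; no API key is needed for a purely local server.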

Matthew Oatts

Okay, so I know one of the workflows. I've talked about this multiple times; we have to keep repeating it every single time we have a Smart Connections lean coffee. The primary workflow that I am using with Obsidian, and quite frankly Zia on the call is also helping me with this, is pretty much everything in gold here. If I can take the things that I encounter in the world, be it a blog post, or an email that I get, or a link that I find while I'm scrolling, say I see something interesting on LinkedIn: what's the lowest-effort way to capture that information, and also the source reference to that information, so I can have it properly credited down the road? And then begin to get all of that into Obsidian so that, I wouldn't say future-proof, but without even worrying about what I'm going to do with it, I start to have the lowest-barrier, highest-effectiveness ways to harvest the knowledge and information that I encounter in the world that's relevant to me. So that I can then use it in the next wave of AI agents, or, as the cost of solutions tied to RAG and all of that comes down, I have essentially a personal vault of information that's within, not some other company's walled garden, but my own garden that I could potentially leverage.

So, the very first lean coffee that we had, I keep going back to this picture, but we talked about this idea of digital selves. And if you break down your digital self, in theory, you could have a digital self that has different agents that do certain things automatically with your digital self. And now we start to explore the idea of digital selves talking to other digital selves. But in order for that to happen, I need to be harvesting what I encounter in the world into some sort of medium, something smaller than the Internet, that all of these technologies can actually leverage to understand what I'm encountering in the world. So, from a workflow perspective, that's really something that Zia and I are experimenting with right now: how do I make it as easy as possible that, in a moment, if I encounter something, it's just boom, add that to the thing, and when I have time, I'll go back later. I think, Jason, you were mentioning that idea; somebody on this call mentioned it, maybe when we were talking about Evan and the visualizer stuff, this idea of select this note, this note, this note, bump them all against each other, and see what insight might come out of it. Imagine if everything you've encountered over the last two, three, five years was archived in a data set that you control, and you could do that. And if that continued to progress for the rest of your lifetime, you are essentially building a library of everything that you've ever encountered, which is more valuable than... anyway, that's a thing that I think about all the time with workflows and Smart Connections.

Fran Abenza

Yeah.

Matthew Oatts

And I don't even know what the green stuff that comes out of that in the future looks like. But where my head is at right now is that I'm trying to future-proof against the fact that all of that is rapidly changing. So what are the workflows to make the gold stuff start now, as opposed to starting two years from now?

🌴 Brian

I want to just say, number one, the visuals you do are so incredibly valuable to this meeting, so thank you for doing that. But the second thing is, I put this little note down here, to the bottom left of "synthesize". What do you think about an inbox as a way... and just to be clear, everything you're saying, that is the vision. That is what I see as the future, so that's what I'm working towards. But now, what does that actually look like from an interface perspective? Something I've thought a lot about is an inbox. And of course, because this is how I name everything, I call it the Smart Inbox. But am I right or wrong to think that, after these first three steps, intake, collection, synthesize, which can all happen in the background, as I imagine it, you'd then be presented with an inbox? Does that sound like I'm on the same page as where you're going toward?

Jason Bates

So I have that already, using Dataview around properties in the frontmatter of notes. So, like Matthew, I'll go and read stuff. It comes in through Readwise, or research papers through Zotero; there's only a couple of ways that it comes in, ultimately through a couple of integrations. Then I have a sources folder that has articles and books, and I use Snipd for podcasts as well. So ultimately, whenever it automatically creates a note there, the frontmatter has a status tag, which is "inbox". And then I have a Dataview page which basically lists everything in there. And then I can go in, think, oh, that's interesting, and might create some notes around that. And then I use Smart Connections, the embeddings on the side, because as I'm making the notes, I'm like, I really want to connect the knowledge to everything else, or to the most useful areas. That's, you know, that's how it starts to get value, where you're like, oh, this idea is really similar to this idea, and actually I like that, so I'm going to create a new summary note that connects to all of them. And then I'll bring in Text Generator and say, these three ideas are very similar. And I'll have Text Generator create AI output that's within a callout, like an admonition callout, so I always know which is the AI-generated content and which is mine. But then I'll have a dialogue as to how that goes, pulling in notes or adding wiki links to things from the Smart Connections side. So I think the inbox is there; I think you can do it, you know, separately like that. But the processing, and then how you... I find your tools amazingly useful and super interesting around being able to auto-generate some of those notes.
But I think one of the things I listed was, I'm wondering whether I'm going to need to make my own custom GPT to put my frontmatter and some categorization stuff on the notes that Smart Connections generates, or that I generate, because otherwise I generate notes and then I have to go in and manually add these things. So I'm sorry, there was a lot of stuff there, I guess.
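The frontmatter-driven inbox Jason describes could be sketched roughly like this. This is a minimal, hypothetical sketch in plain JavaScript, not the Dataview API: it assumes each note is an object exposing its parsed frontmatter and a creation timestamp, and the inbox view is just a filter over a `status: inbox` property.

```javascript
// Hypothetical note shape: { path, created, frontmatter: { status, ... } }.
// The inbox is every note whose frontmatter status is "inbox", newest first,
// analogous to a Dataview query over the same property.
function inboxNotes(notes) {
  return notes
    .filter((n) => n.frontmatter && n.frontmatter.status === "inbox")
    .sort((a, b) => b.created - a.created) // newest first
    .map((n) => n.path);
}
```

Processing an item would then just mean changing its `status` property, and it drops out of the view on the next refresh.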

šŸŒ“ Brian

Yeah, no, it sounds like you've already developed your own sort of inbox user experience. And, you know, I'm a big fan of Dataview; pretty much whenever I open up Obsidian, I'm looking at a Dataview, because that is kind of how I'm hacking together an inbox at this point. My idea is, with a lot of those different steps that you were mentioning that lead up to you reviewing the Dataview: how much of that can just be thrown together into a background process? The UI, what it looks like, is not as much my concern as what is included, what information. You mentioned you use the callouts to show the generated stuff. So when you're importing things from outside of Obsidian, is there a process that the AI goes through to say, hey, based on all your other notes, this thing might be more important? That's what I was getting at with the Inbox UI: do some pre-processing and then display it to you in a way where it's like, based on our past interactions, I think this should be the first thing you look at. Then another important part is that right now, at least how I'm using it, there's no way to really give feedback to the Dataview to say, hey, this thing shouldn't be prioritized number one. So then I'm in there trying to write these sorting algorithms and adding more frontmatter to make the right things pop up first. But imagine if that was all just happening in the background because you said, nah, this thing should not be the first priority, and then somehow that propagates to the future.
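The feedback loop Brian is describing could be sketched as a ranking that combines a similarity score with a user "not a priority" signal, so a demotion persists into future views. All names here are hypothetical; this is a sketch of the idea, not the plugin's implementation.

```javascript
// Rank inbox items by similarity, but let explicit user feedback demote
// items so the demotion "propagates to the future" across sessions.
function rankInbox(items, demoted) {
  const score = (item) => item.similarity - (demoted.has(item.path) ? 1 : 0);
  return [...items].sort((a, b) => score(b) - score(a));
}
```

The `demoted` set would be persisted (e.g. to disk) so the same feedback keeps shaping the ranking later, rather than being re-entered as frontmatter by hand.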

Jason Bates

Yeah, I mean, I love the idea of the pre-processing: eventually my Readwise notes are all in there, and it's already generated not only a summary of those with some of the key points, but actually connected them to other notes that are in there. And then the prioritization, I guess, comes to me, or I internally prioritize based on the kinds of outputs, right at the bottom, that I want. So if I know that I'm writing an article on something, or a book chapter on something, or I want to get something out of it, then the AI almost needs to know what it is that I'm trying to achieve to know what priority each of these things has. Otherwise, how do you prioritize? But, you know, I love it. It's almost a research assistant in the background, taking your ideas, your notes, summarizing, connecting, coming up with new ideas. I put the InfraNodus question down. I played with the tool recently, and it does text graph analysis of just single notes. But the interesting thing, I think, that Matthew was talking about was this idea of smushing this idea with that idea. It actually looks at an article and says, well, I can cluster words together in a variety of ways that are particular ideas. And then it starts to say, well, how do you bridge that gap? So I've got, you know, a big idea. This article has four ideas, but these two are connected quite closely, these two not at all. So what is it about the principles that are deep within the text? That's quite interesting. It's a very different approach from the smart graph and the Smart Connections stuff. But there's something about how it thinks about concepts, by clustering a variety of ideas together and then calling that a concept and then seeing how it connects.

šŸŒ“ Brian

Yeah, I'm going to have to review InfraNodus. I've seen it in the past, but I never really dove into how it's working.

Jason Bates

It actually is a very strange tool.

šŸŒ“ Brian

Yeah, I'm going to have to look at it. Just for the sake of time, because I know we have a couple more voted notes here, I put down "review InfraNodus". This is something that I'm going to do by the next meeting. But unless anybody has any closing thoughts on the desired workflows, I say we move on.

Evan Moscoso

One closing thought: with what you're talking about, the AI-assisted inbox processing everything, I feel like that's all just going to be part of the template. Like, whatever template you create, you add it in there. Like, the AI-assisted inboxing process is how that's going to start.

Evan Moscoso

Just a thought on that. I just see that being defined in the template.

šŸŒ“ Brian

Yeah, that's one of my thoughts behind the templates: I do see it being a way that we can create these workflows. You know, I actually see function calling... yeah. And I see a workflow actually being like a derivative of a template. So a workflow is really just a sequence of templates. Right, right.
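"A workflow is really just a sequence of templates" can be sketched in a few lines: each template is a function that takes the accumulated context and returns a new one, and a workflow is just a pipeline over them. This is an illustrative shape, not the plugin's actual template API.

```javascript
// A workflow as a sequence of templates: each template transforms the
// running context, and the workflow threads the context through in order.
function runWorkflow(templates, context) {
  return templates.reduce((ctx, template) => template(ctx), context);
}

// Example templates (hypothetical): one summarizes, one tags.
const summarize = (ctx) => ({ ...ctx, summary: ctx.text.slice(0, 5) });
const tag = (ctx) => ({ ...ctx, tags: ["inbox"] });
```

An async version, where each template may call a model, would be the same shape with `await` in the loop.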

šŸŒ“ Brian

But thanks, guys. That was a, that was a great discussion here.

Matthew Oatts

Iā€™ll do one more thing and then we can shift over because. So I know this is a good discussion. I know we have other topics. And Fran, it sounds like he got something else too.

Fran Abenza

No, no, no. I just love how he said it, right? Like, a workflow can be a succession of templates. I think that's awesome. Yeah, I like that future.

Matthew Oatts

Cool. I think I've shown this before, but just as you're thinking about templates and going forward... I think most people here are familiar with Ethan Mollick and his work, but one of the things that I really like is that some of his student exercise prompts end up themselves becoming workflows. So, for example, the tutoring prompt: if you are working with a tutor on a topic, it's not just supposed to be giving you the answers. What this does is actually set up a structure for how you're essentially turning the conversation with ChatGPT or Claude or whatever into... it's those instructions that then guide how it works back with you through a topic. And so as you're thinking about these templates, you could follow some of these patterns. Imagine you had a prompt for AI-assisted inbox processing, and you said, hey, I'm ready to process my inbox. And then it actually had a prompt that was setting the context, understanding how much time you have. Imagine, if it was the AI assistant prompt, you could call it: the first thing you need to ask me is how much time do I have? Do I have five minutes? Do I have an hour? The next thing is, do you want to introduce new priorities that have come up, or do you want me to use the priorities I already know you're working on, because you're writing a book and you've got a client project going on? You could essentially flip the paradigm a little bit and have the GPT function as a thing that's actually interviewing and working with you through a process.
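The interview-style pattern Matthew describes could be captured as a prompt template along these lines. The wording is purely illustrative, an assumption on my part, not Mollick's actual prompt; the point is that the instructions make the model ask its questions before doing any work.

```javascript
// Hypothetical inbox-assistant prompt: the model interviews you first,
// then processes items, flipping who drives the conversation.
const inboxAssistantPrompt = [
  "You are my inbox-processing assistant.",
  "First, ask me how much time I have (five minutes? an hour?).",
  "Next, ask whether new priorities have come up, or whether to use the ones you already know.",
  "Only then walk me through my inbox, one item at a time, checking my decisions as we go.",
].join("\n");
```

The same skeleton works for the tutoring case: swap the questions for "what topic?", "what do you already know?", and have the model quiz rather than answer.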

šŸŒ“ Brian

Can you drop that link in the, in the action items for me? Because I definitely want to give that one. A further review.

Matthew Oatts

Yeah, yeah, this is. I get it. Sort of. Oh, okay.

šŸŒ“ Brian

Itā€™s already down there somewhere.

Matthew Oatts

Yeah, I added it right there. Oh, perfect. Gotcha. Yeah, it's a cool thing that I've used before. In fact, the other day... Zia's on the call, I'm gonna shout out to you a little bit. This is another topic, but Zia's got some stuff at university right now, you know, some chemistry lesson or whatever, and what we essentially did is we used that prompt and uploaded a chemistry lab. And that prompt essentially turned into a tutor that could work through that lab's context and ask Zia questions to test his understanding of the topic. So, I mean, Zia, were you connecting that dot as you were listening to this conversation when we talked about templates? Yeah.

Zia Noori

Yeah, it was very useful. Cool.

Matthew Oatts

Thanks, Zia.

Zia Noori

Yeah, that saved my grade on that one.

Matthew Oatts

Nice. All right, the other two topics here for the last 10 minutes: Smart Connections inside Quartz. I have no idea what that means, so let's explore that.

šŸŒ“ Brian

So I know what that means, but Iā€™m sure that that was Fran. So did you have anything you wanted to elaborate on that? Because I do have that in the plans.

Fran Abenza

Great. That is great... that is really great to hear. I think it was, like, I was really surprised, you know, when you started. I didn't know, by the way, that there was a docs site for Smart Connections, literally on Quartz.

šŸŒ“ Brian

Yeah.

Fran Abenza

Since it is new. Yeah. Also, it's not really well indexed on Google at all. What Quartz is, is just a way to publish, like, how do you call it? Obsidian Publish, but open source, and you can just create pretty much a nice way to share your notes with the world. It creates a website, let's say a static website, with your notes.

šŸŒ“ Brian

Yeah. And the important part about it for me is that it's customizable, unlike Obsidian Publish, which I'd love to use because it's very convenient, but I just need to be able to do more. One of those things is I want to be able to have the Smart Connections interfaces inside the website. And just so you know, as far as progress on that, Fran: there are some key components, like, are we generating embeddings and including them on pages? I've actually already started working through a lot of the problems that would need to be solved to get Smart Connections on Quartz. And I'm actually highly motivated to do that, because I think having Smart Connections in the documentation for Smart Connections actually shows how the software is supposed to work. So this way, somebody who may not be very familiar can actually see, oh, this is what it can do.

Fran Abenza

Yes. Because at the moment, I must admit, as much as I love Quartz and the documentation, and I think it is a great idea to have, what crosses my mind when I enter the Smart Connections docs for the first time is like, oh, this looks like a complicated tool. Like, you're abstracted away from Smart Connections, right? Because you see so much documentation: this must be hard to use. Which is not the case. Yeah, the documentation...

šŸŒ“ Brian

Whatā€™s that?

Fran Abenza

No, no, what I wanted to tell you is like, if youā€™re actually actively already working towards that, I, like, I can give you a link to my calendar and we can just collaborate, like, you know, like maybe put one hour a week or two hours a week or something like that and push to make that happen. I think it should be easy somehow because they share technology, right?

šŸŒ“ Brian

Yeah. So it should be relatively easy of an implementation. Thereā€™s a few other things that I have that I see as more critical right now just because itā€™s like what the. Itā€™s just like right now thereā€™s a lot of broken things.

šŸŒ“ Brian

So I just want to get everything working cleanly. But yeah, if I don't have it out by the beginning of next year, definitely reach out to me about that, and maybe talking to you will help me get my head around what exactly needs to be done to get it out quickly. But I do think it's going to be a relatively easy implementation, because for those documentation pages, all of that is in my Obsidian. So that means all the embeddings are already there. So it's really just adding this next step of compiling the necessary embeddings so that the, you know, cosine similarity can basically be run client-side. Or, you know, whether or not that should be the case, I'm still deciding.
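The client-side step Brian describes, ranking pages against the current page using precompiled embeddings, comes down to cosine similarity over vectors. A minimal sketch in plain browser-runnable JavaScript, assuming each page ships with an `embedding` array (the shapes here are assumptions, not the plugin's data format):

```javascript
// Cosine similarity between two embedding vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank every other page against the current page's embedding, best first.
function nearestPages(current, pages) {
  return pages
    .map((p) => ({ path: p.path, score: cosineSimilarity(current, p.embedding) }))
    .sort((a, b) => b.score - a.score);
}
```

For a docs-sized site this brute-force scan is cheap enough to run on page load; the open question Brian raises is only whether the embeddings should be shipped to the client at all.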

Fran Abenza

Yeah, I think that all the components do make sense, right? Like, to have the Smart Connections list of semantic similarity, for sure, but as well as the chat, right? It would just make so much sense to have these two components. And I guess I wanted to... very quick side note: I realized that at the beginning, when the smart search had this feature, you could just type, and then after some milliseconds it would do the search, right? And now you have to click the search button. Okay, yeah. I miss the other thing. I really do.

šŸŒ“ Brian

That's like... that's like three lines of JavaScript, maybe less. So I'm writing a note down right now: auto-submit in the lookup pane.
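The "type and it searches after some milliseconds" behavior Fran misses is a classic debounce: fire the search a few hundred milliseconds after typing stops. A hedged sketch, not the plugin's code; the timer functions are injectable only to make it testable, and the names are hypothetical.

```javascript
// Debounce: collapse a burst of calls into one, fired after `waitMs` of quiet.
function debounce(fn, waitMs, timers = { set: setTimeout, clear: clearTimeout }) {
  let id = null;
  return (...args) => {
    if (id !== null) timers.clear(id); // cancel the pending call
    id = timers.set(() => {
      id = null;
      fn(...args); // only the last burst of arguments survives
    }, waitMs);
  };
}

// Usage (hypothetical): wire it to the lookup input's "input" event.
// inputEl.addEventListener("input", debounce((e) => runLookup(e.target.value), 300));
```

A keyboard shortcut to submit, as Fran suggests, would coexist fine with this: the shortcut just calls the search immediately.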

Fran Abenza

Yeah, maybe it could confuse some people, I don't know. But I think it was... it's intuitive, and at least there should be a shortcut to search rather than clicking with the trackpad. Yeah, that just kind of crossed my mind. It's like a pain point. Yeah, definitely.

šŸŒ“ Brian

Because I feel like this is just me and you, Fran. Letā€™s move to the next topic.

šŸŒ“ Brian

But yeah, we can definitely follow up about that.

Fran Abenza

So how about this?

Matthew Oatts

We got five minutes left. Evan, Brian, I added this sticky note here because you guys were talking about this. Can you maybe pump us up around what's around the corner? You said visualizations are the future.

Evan Moscoso

Yeah, all right. I mean, I'll take that. It's around the corner... I mean, "around the corner" is debatable. I'm working on it. But, now that I moved this one more time: that is the future of, like, not just how we think about things, but even artificial intelligence. I'll send this chat thing to you guys. I'm not sure if you guys know who Fei-Fei Li is, but she is, like, an entrepreneur who's worked heavily in the AI space, and she's pushing the frontier of visual spatial intelligence. And the reason is because with AI, they're all language models. You're trying to communicate to an AI a three-dimensional world using text and words. How much could it truly understand of what our world is through that? But it's not just that, it's with humans as well. Like, with what Matt has going on here, would it be better if we had this all written in text documents for us to look through, or is it way easier for us to comprehend and use logic with everything? It's like playing chess. Let's play chess without the chessboard. You know, people do that in their heads, knight to d5, and you just have to remember everything. The amount of logic that you can use with your limited brain capacity is, like, physically impossible for complicated things. And even the language we use, the words that we use, even if I mean to say them, aren't as accurate as the thoughts that I have in my head. Nine times out of ten, that's, like, the main conflict of the majority of the world, I feel: miscommunication through the words that they say, through how they write them, through all the text. But if you have a visual aspect, that is something that makes you understand and use logic with things at a ten times better rate, because you're offloading the memory that you need to retain in your head onto a visualization.
Now it's like, all right, instead of thinking about all the chess pieces on the chessboard, you move that chess piece, and you don't have to think of every single chess piece that moves with it. I can just see it, I can just move it, and see everything move automatically, dynamically update. So that's what I mean: the future of that is you'll be able to see everything change, versus, like, if you update some text, it'll update some notes that you have. But, I mean, how much is that really gonna help? And so that's where I think the visualization is gonna completely change everything. And also, on top of that, you could send the AI your map. Like, if I have a map of all the different connections, I can send it that. So instead of it hallucinating, because even if you have embeddings and whatever, there's still the chance of it hallucinating, coming up with stuff, with this, at least you've now created a visualization that is static, that the AI can't mess up. And so you both have something that's grounded every time. You send this visualization in, and with whatever I want, I keep adding to it. There's so much more I can say about this, but I think that's just, like, a sneak peek. It's just exponentially better than having everything in text, because we can understand and comprehend so much more, and even the AI can as well. So I really think the next jump, at least with AI, will be with what Fei-Fei Li is working on in terms of visual spatial intelligence. Because, honestly, there's no way you can understand the 3D world just through text and audio. And even with the visual aspect of the images they give you, how that works is that they just process the image into text that gets sent into the AI. It's not the AI seeing it. So it can't understand, like, this is this distance away from this, and this is that distance.
But if it could, the amount of information and knowledge it would know would be exponentially greater. So, I mean, there's more.
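Evan's "send the AI your map" idea boils down to serializing the visualizer's graph into a compact text block the model can use as grounding. A hedged sketch; the edge format is an assumption, not the Smart Visualizer's actual data model.

```javascript
// Serialize a connections graph (edges as [from, to, weight] triples) into
// a plain-text block that can be pasted into a prompt as static grounding.
function graphToPrompt(edges) {
  return edges
    .map(([from, to, weight]) => `${from} -- ${to} (similarity ${weight.toFixed(2)})`)
    .join("\n");
}
```

Because the serialized graph is fixed text the model receives verbatim, it plays the "static thing the AI can't mess up" role Evan describes, unlike connections the model would have to infer on its own.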

Matthew Oatts

Yeah, it's funny you mentioned that, because that is almost, in terms of how I use AI in the workplace... I'm a business consultant and I do transformation, change management, helping clients go from strategy to execution. And a lot of the mediums that I work with are by definition already visuals, because I have to communicate. Most of the success in my field has to be visual communication of complex things in simple forms to humans. Well, if I need to rapidly get an AI to assist me in some of that activity, the quickest way for me to do it, with the right parameters from a security perspective, is screenshots of those visuals that I can put into NotebookLM or a Claude Project or ChatGPT and then say, use this context. And it is fascinating how much more effective the LLM models are getting at just using that to seed things. I think what I'm hearing you describe, and this is a good way to maybe close this out, is that it doesn't have to continue to be kludgy, that's a scientific word, it doesn't have to continue to be awkward, actually using visuals as a first-order medium to build out that shared understanding with AI. And then having AI be able to produce things that aren't absolutely horrendous visuals as part of the conversation, because those never work, at least so far they don't work very well...

Matthew Oatts

Would be pretty cool, right?

Evan Moscoso

Because, I mean, seriously, a picture's worth 1,000 words. So instead of typing out your freaking long essay prompt, you just send a picture, and it has everything. And especially now with the ChatGPT app, they're going to be having video, where even, you know, you have your app, but then eventually, when it gets to the glasses, your prompt is going to be the video of whatever you're seeing.

Evan Moscoso

So it has the context of everything. So the questions you ask it, because the prompt is so rich with data, thereā€™s so much data in that prompt, the answer it can give you is going to be 10 times better than if you were to describe it, like in a text or in a document.

Evan Moscoso

So that's what I'm saying, especially even with a graph that we can generate with the Smart Connections Visualizer. Granted, it's the very beginning; it's not where we want it to get to, but eventually it should be able to... even...

Jason Bates

Even writing a white paper or writing a document or something, I tend to use canvas to lay out the ideas that are all connected, and then you move the pieces until you can describe it in a...

Matthew Oatts

Connecting what you're saying, what all you guys are saying: being able to use a canvas or a graph to describe a concept as the output of the interaction is going to be far more effective at getting across the outcome that you're looking for than having a text prompt. That's what you guys are talking about as, like, a future thing: being able to communicate with the currency of visualizations as opposed to text. Okay, cool.

Evan Moscoso

Exactly. So, like, to give you guys a little background,

Matthew Oatts

I gotta wrap up the call, by the way.

Evan Moscoso

Oh, yeah, sorry, sorry. Go ahead, go ahead.

Matthew Oatts

Yeah, this is great.

Evan Moscoso

So, like, real quick: I work on visualizations a lot, like, for my job and everything. And it's the ability... not just, like, all right, I want to filter through this, I want to filter that. You can cluster and group things in all these different ways, find all these different connections very easily, by being able to manipulate the visualization whatever way you want. And you can even do that with a prompt, like: give me all the times that I, like, progressed further in my career, or all the notes that I have about that, and stuff like that. And then it'll be able to aggregate and visualize, and you'll see, oh, but this node is connected to that. It's all there.

šŸŒ“ Brian

I'm very excited, but for the sake of time... I appreciate the insights, Evan, definitely. I want to see you at the next one so we can dive deeper.

šŸŒ“ Brian

Thank you, everybody, for participating. Thank you, Matthew, for the wonderful job you do and for contributing your time to make this happen. You know, these calls are incredibly valuable to me. I love them. So thank you, everybody. My pleasure.

Matthew Oatts

Thanks, everybody. See you next time. Thank you.

Fran Abenza

Take care, guys.

Zia Noori

Thank you, Brian. šŸ˜ŠšŸŒ“