Is Tauri the Electron Killer?
Key takeaways
- Tauri revolutionizes app development by merging web skills with robust performance and security, making it easier than ever to create small, efficient applications for both desktop and mobile.
- The shift to mobile has transformed app development, making it faster, smaller, and more secure, while also highlighting the challenges of updates and data costs for users worldwide.
- In a world where software accessibility can be a lifeline, building apps that are energy-efficient and resource-preserving is not just smart—it's essential.
- Great programmers can build powerful tools that bridge Rust and JavaScript, transforming how we develop apps across platforms.
- Building a browser in Rust opens doors to unprecedented performance by sharing memory between the backend and frontend, but navigating the experimental landscape requires patience and careful planning.
- In a world where tech giants control the tools we use, true innovation comes from community-driven projects that prioritize user empowerment over profit.
- Building a community-driven project means embracing collaboration and simplifying complex tasks, so everyone can focus on what truly matters.
- Building mobile apps with Tauri is a journey of collaboration and adaptation, leveraging community-driven solutions to bridge the gap between native APIs and JavaScript.
- The strength of a community lies in its willingness to help each other grow and innovate together.
- If you're willing to put in the time, learning to code isn't just for the experts—it's for anyone ready to embrace the challenge.
- If you commit to learning and put in the effort, you can achieve more than you think, regardless of your background.
- Innovation is essential; without it, even the best projects risk fading into irrelevance.
Tauri revolutionizes app development by merging web skills with robust performance and security, making it easier than ever to create small, efficient applications for both desktop and mobile.
Welcome to Syntax! On today's supper club, we have a really good one for you. Daniel Thompson is here to talk all about Tauri, which is just hitting version two and is currently in release candidate. We will be discussing what Tauri is, the motivations behind it, its potential uses, and delving into the nitty-gritty details.
My name is Scott Tolinski, and I'm a developer from Denver. With me, as always, is Wes Bos. "What's up, Wes?" I asked. Wes replied, "Hey, I'm excited to talk to Daniel about Tauri! I know you just wrapped up a week working with it, and we've been discussing it for what seems like over a year now. So, I'm stoked to have him on." I added, "I've done a lot of Tauri projects, so I have many questions and thoughts here. I'm a big fan of the project. Welcome to the show, Daniel! How are you doing?"
Daniel responded, "Great, thanks! You know, I'm in the hot seat, but that's because I'm in Malta. It's about 30 degrees here; however, I'm in a kind of temperate basement. I've heard from friends of mine across the Atlantic that it looks like the walls are made of blocks of butter. They're not; they're limestone. But I'm doing great and happy to be here to talk to you all about that fantastic Tauri stuff."
I then asked, "Do you live in Malta, or are you just vacationing?" Daniel replied, "I live here! I actually moved here a few years ago and fell in love with the climate. I have no allergies here whatsoever, which I can't say for the other 45 years of my life. I actually started a company here and run it out of Malta with a whole bunch of amazing people from around the world."
Wes chimed in, "Wow, that's impressive! Let's kick this off with a little bit about Tauri. How do you pronounce it?" Daniel responded, "You know, my friends at Snyk also say 'sneak.' It depends on what part of the world you're from, whether it's Israel or California. I probably even said Israel wrong! The important thing for me is that people understand what it's all about; names and the ways we pronounce them don't matter as much. I say 'Tauri,' you say 'Tauri,' tomato, tomahto, right?" Wes added, "Awesome! You did say 'tomahto,' you're right."
I continued, "So, do you want to give the audience, maybe someone who's never heard of it before, a brief overview? They might not have heard us talking about it or just never seen this project. What's the deal with it? What's it doing?" Daniel explained, "Sure, that happens a lot. I think the three-sentence pitch is that Tauri helps you make really small, performant, and secure applications using a system web view. If I've already said too many words that people don't understand, I'll start from the very top. You need a user interface, and Tauri helps you make that using the skills you might already have as a web developer on the front end. It provides you with an application programming interface, an API, that lets you communicate with the core of a Tauri app, which is traditionally written in Rust. However, there are projects that allow you to do it in Python, and if you're adventurous, heck, why not even write it in C?"
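Daniel's description of a web front end talking to a native core can be made concrete with a toy sketch. The snippet below is illustrative only: it fakes the command registry and the serialization boundary in plain JavaScript so it runs anywhere. In a real Tauri app the front end imports `invoke` from the `@tauri-apps/api` package and the handler would be a Rust function, so the names here (`commands`, `greet`) are stand-ins.

```javascript
// Illustrative sketch of Tauri's command/invoke model, NOT the real API.
// "Core" side: a registry of named commands (Rust functions in real Tauri).
const commands = {
  greet: ({ name }) => `Hello, ${name}!`,
};

// "Frontend" side: invoke serializes the payload, crosses the process
// boundary (simulated here with a JSON round-trip), and deserializes the reply.
async function invoke(cmd, payload = {}) {
  const wire = JSON.stringify({ cmd, payload });     // serialize the request
  const { cmd: c, payload: p } = JSON.parse(wire);   // "cross" the boundary
  if (!(c in commands)) throw new Error(`unknown command: ${c}`);
  return JSON.parse(JSON.stringify(commands[c](p))); // serialize the reply
}

invoke('greet', { name: 'Tauri' }).then((msg) => console.log(msg));
// prints "Hello, Tauri!"
```

The point of the sketch is the shape of the contract: the UI never touches the OS directly, it only sends named, serializable messages to the core and gets serializable replies back.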
I asked, "So basically, what you're making are apps with Tauri, right? You're making desktop apps, and now with version two, you're making mobile apps as well, right? Is that what version two is all about, or is there more to it?" Daniel replied, "There is a lot of work under the hood to make Tauri even more performant and smaller. In the course of our design decisions leading up to the 2.0 release candidate, we felt it was important to include the mobile ecosystem. Some people have accused me personally, you know, in a way, of doing it for the upvotes on Hacker News or GitHub. But actually, it's a really interesting architectural strategy. Once you think you know a system really well and you provide one way for people to interact with it, you think that is the way. In the course of integrating Tauri into Android and iOS, we discovered that the way we built for desktop just didn't work. We actually had to fundamentally rethink a lot of the things we were doing, and what that led to was an even more secure system for users. Now they actually have to opt into and grant permissions on a very granular level to the various subsystems they need, whether that's the camera or file storage."
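In Tauri 2, that granular opt-in is expressed through capability files that grant named permissions to specific windows. The fragment below is a hedged sketch of what such a file can look like; the exact permission identifiers depend on which plugins and Tauri version you use, so treat the names here as illustrative rather than copy-paste ready.

```json
{
  "identifier": "main-capability",
  "description": "Illustrative example: only the main window may read text files and show notifications",
  "windows": ["main"],
  "permissions": [
    "fs:allow-read-text-file",
    "notification:default"
  ]
}
```

Anything not listed is denied, which is the inversion Daniel describes: subsystems are closed by default and opened per window, per permission.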
The shift to mobile has transformed app development, making it faster, smaller, and more secure, while also highlighting the challenges of updates and data costs for users worldwide.
The move to mobile actually brought with it a lot of other benefits to the ecosystem.
I think another concern people had with 1.0 was the method by which you send messages from the user interface to the core, which we refer to as interprocess communication. The way we had been doing it in the beginning left much to be desired, especially when you wanted to do things like stream data from the back end. Every blob has to be serialized, transferred over to the front end, and then deserialized before it can be integrated into whatever is reading it, whether it's a video player or an audio stream. We put a lot of work into verifying that we are now, I think, 10 times faster with 2.0 compared to 1.0.
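The cost being described is easy to demonstrate in plain JavaScript. The sketch below (an assumption-laden illustration, not Tauri's actual wire format) encodes a binary buffer as a JSON number array, one common way data crossed the webview boundary in the 1.x era, and shows how much it inflates the payload; every byte is also touched twice, once to encode and once to decode.

```javascript
// Sketch of why serializing binary blobs for IPC is expensive.
const raw = new Uint8Array(1024).fill(0xab);      // 1 KiB of binary data

// Encode as a plain number array inside JSON to cross the boundary.
const wire = JSON.stringify(Array.from(raw));

// Decode back into bytes on the receiving side.
const decoded = Uint8Array.from(JSON.parse(wire));

console.log(raw.byteLength, wire.length);
// The JSON string is several times larger than the raw payload, and both
// encode and decode walk every byte; overhead a raw transfer path avoids.
```

For a video or audio stream this tax is paid on every chunk, which is why moving to an IPC design that can pass raw bytes makes such a large difference.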
I remember way back when we were writing up the list of things that we thought were important to qualify 1.0 as being stable. One of those was having an updater that lets people update their apps. For beginners, the thought process often is, "Oh, I made an app; I'm done." However, it's actually an infinite game in the context of software development. After the release is before the release, and you have to figure out a way to get that updated code to the users in a secure way.
That was one of the first occasions where we felt we found a better solution than what Electron was doing. Our friends in the security community found a vulnerability that we were able to sidestep because we now require people who ship updates to sign the update binary with a private signing key. This way, when an update arrives, the app can prove that it comes from the same authorial provenance.
Those are all tough problems. Unlike websites, where we can just push a new update without worrying about signing and deploying, it's a whole different world with apps. We’ve come to realize that things like integrity checksums are important when pulling source from remote locations. With the whole module federation concept blowing up everywhere, it presents a significant opportunity to hot push to running apps. I believe that the security model can always be improved, and perhaps there are ways for people using module federation and Tori together to get the best of all those worlds.
Now, talking about Electron, people listening right now might be thinking, "That seems cool; maybe I've built an Electron app in the past." The real benefit of using Tauri versus Electron is that it's not huge. The joke with Electron is that every app bundles an entire browser, so we end up running these massive Chrome apps on every single desktop. Before you know it, you might have eight instances of Chrome running, and I'm probably running them right now.
Is that fair to say? Obviously, one of the benefits of shipping a Tauri app is that your binaries are only a few megabytes in size, at least until you start adding large language models or shipping lots of images. The smallest app we ever built was around 500 kilobytes. This is particularly appealing for people doing hobby projects on GitHub in their free time, who don't expect to get 100,000 downloads a month, or a million or more.
However, once you reach that point, you realize that GitHub isn’t actually a CDN, and problems arise when you start putting those binaries in an AWS bucket because someone has to pay that bill for the transit, and it’s going to be you.
I never thought about that until I considered how apps like Discord update. Discord updates every time I open it, downloading around 300 megabytes or something like that. That’s expensive! I don’t know about you, but I travel quite a bit because I live in Malta. I usually leave the island by plane and often forget to turn off data while traveling. I live on my phone during these trips and sometimes don’t even bring a laptop. I can get everything done, but then I’m at the airport, and LinkedIn says it needs to download 150 megabytes.
I mean, I'm not going to complain about the LinkedIn app; I think LinkedIn is a wonderful service. However, in Switzerland, I'm paying roaming charges, so I would ultimately pay 15 EUR to download the update for LinkedIn right there. The problem gets compounded when you start thinking about the accessibility of software engineering in regions like Kenya and India. Kenya, in particular, is notorious for not even having reliable 4G, and people there often pay for traffic directly.
In a world where software accessibility can be a lifeline, building apps that are energy-efficient and resource-preserving is not just smart—it's essential.
In regions that skipped the laptop generation entirely, everybody is on their phone. Finding and providing a methodology by which people can create cost-effective, energy-efficient, resource-preserving applications can, in some cases, even turn into life-saving events. I think the claim that Electron is just free real estate has been debunked by now, but there are a couple more nuances to consider.
You mentioned that it ships a Chromium instance. Which Chromium instance does it ship? Well, I can tell you it's not the latest Chromium; it's always behind. When you produce an Electron app and ship it to your customer, chances are good that by the time it's shipped, there are already n-day exploits out there for the version of Chromium your users are running. What that turns into is security whack-a-mole. If you are shipping an Electron app and you have a CISO on your team, then they are probably justified in requiring a milestone of shipping an updated Electron app, probably that same day.
Given that Europe, the US, Japan, and others are starting to get really concerned about the security of our software, having a framework like Tauri that is regularly audited for minor releases and externally audited for major releases is a real positive way to gain confidence in the framework itself.
I don't think I ever thought about that. I'm curious about one of the huge benefits of something like Electron: the backend is in Node. I feel like that makes it very accessible to a lot of regular developers. However, with Tauri, if you want to write the backend, it's in Rust, right? Most people think, "Oh, I have to learn a little bit of Rust." I built an app in it about a year ago, and I had to learn a little bit of Rust; it was kind of fun. Is there ever a time when you could also have a Node backend as well?
Let me tear apart your points here for a second. Node? Probably not. Deno is something that's in research right now. I'm trying to remember the name of that image processing library that everyone is using in Node.js. There's this one very popular one, Sharp. Doesn't Sharp require some kind of post-install compilation phase inside of the runtime because it's not written in JavaScript?
Yes, we just had Ryan on, and he's like, "That's the stupidest thing we could have ever done: automatically running code on post-install." Ignoring the absolute security horrors of having your developer machines compromised, my point is that a lot of Node modules are native modules themselves; the really powerful ones are written in something like C or C++.
While you can use Rust, we've tried to take every single pain point out of the Rust equation. You can write your menus in JavaScript, trigger all of the low-level systems, like sending a notification to the user, and use the HTTP service. For example, you can control all of this from JavaScript land, from the client, right from the web worker.
I think the paradigm shift that's important for me to make people recognize is that there's always a better programmer than you out there. It's really easy for great programmers to rapidly build plugins that have not only ways to consume them from Rust but also from JavaScript. Technically, we won't accept anything into the official library of plugins unless it offers both opportunities.
So, you're saying we could build a good chunk of our apps entirely in JavaScript? Even the file system has a JavaScript client-side API, right? You can call the file picker for the operating system you're on, select a file, and that value is returned through Rust and bubbles back up to the JavaScript side.
Great programmers can build powerful tools that bridge Rust and JavaScript, transforming how we develop apps across platforms.
When we were working on our Tauri app during Hack Week, we developed a Syntax production assistant. One of its functions was to run FFmpeg to generate MP3s. This was achieved through Rust, but it wasn't overly complicated; we simply invoked a message, accessed a file, and executed that sidecar. The Rust components were manageable, and we didn't have to implement everything in Rust.
I'm curious about how Tauri works across multiple platforms, including Mac, Linux, Windows, iOS, and Android. Each of these systems has its own web view; for instance, Safari's is WKWebView. Given the diversity of APIs across these systems, how do you manage to test the surface of all these different elements? Do you have a setup with a Mac, Linux, and Windows computer to hash everything out? What does the actual process look like?
We utilize a combination of lots of VMs and physical hardware. Additionally, we have several automated test suites that help us verify whether a particular API is available up to a certain version of a system web view. However, the lonely child among the five quintuplets is Linux. The Linux ecosystem is vast and diverse, and there we rely on a library maintained by our friends at Igalia called WebKitGTK. One significant challenge with this WebKit distribution is the lack of proper WebRTC support, which is essential for client communication, particularly video and audio streaming. Although some individuals have built their own solutions, this was never something our team could support.
It's a similar story with Safari, where the version of the web view is tied to the version of Safari currently installed and running. There were previously some complex hacks to circumvent this limitation, but they proved unsustainable. Interestingly, the WebView2 team at Microsoft Edge has made significant progress in this area. They have developed a system where our applications check whether WebView2 is installed; if it isn't, the application contacts Microsoft to request its installation on that machine. The installed WebView2 is enrolled in a rolling release, meaning that every time a new version ships, the device will download and use it. Users also have the option to install a pinned version.
From a purist's perspective, none of this is particularly ideal. We recognized these challenges back in 2019, when our core team consisted of just four people, and we even attempted to compile Servo.
To explain, Servo is a project initiated by the Mozilla research team. The goal was to explore the potential of using the Rust programming language to build an entire browser. Unfortunately, due to funding changes, that department lost its financial support, and the team was let go. As fate would have it, Firefox was already using some of those libraries and needed someone to maintain them. Over time, our organization adopted a few of those Servo libraries and committed to maintaining them for the broader community. However, we always felt there was a better opportunity out there. We eventually received funding from NLnet, a Dutch nonprofit that administers money from the European Commission's NGI (Next Generation Internet) fund, which supported our efforts.
Building a browser in Rust opens doors to unprecedented performance by sharing memory between the backend and frontend, but navigating the experimental landscape requires patience and careful planning.
That funding supported initial research to test whether a Servo-based web view could be used together with our existing framework. The research proved successful, and development continued. Today, that work has evolved into a standalone open-source community project known as Verso, which uses the Servo engine to produce a browser binary.
Despite the overlap with the Servo team, they are working on building a custom purpose-built web view. This web view aims to follow specifications as closely as possible while providing shared memory. This shared memory feature means that the user interface and the backend can share the same exact types, allowing for extraordinary performance. For instance, if a massive array is sent from the backend to the browser, it can be shared instead of duplicated, which would otherwise consume a lot of memory.
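The copy-versus-share distinction behind that performance claim can be shown in a few lines of JavaScript. This is only a same-process analogy for what a shared-memory web view would enable across the backend/frontend boundary: two typed-array views over one `ArrayBuffer` alias the same bytes, while a copy duplicates the data and goes stale.

```javascript
// Sketch: sharing memory vs copying it.
const backing = new ArrayBuffer(8);
const backend = new Uint8Array(backing);   // "backend" view
const frontend = new Uint8Array(backing);  // "frontend" view over the SAME bytes

backend[0] = 42;
console.log(frontend[0]); // 42, visible with no copy and no serialization

// Contrast: a copy is independent, at the cost of duplicating the data.
const copy = backend.slice();
backend[0] = 7;
console.log(copy[0]); // still 42; the copy no longer reflects updates
```

Scale the buffer from 8 bytes to a massive array and the memory and transfer savings of the shared path are exactly the opportunity described above.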
When asked about the likelihood of this experimental web view making it into the mainstream, it was noted that it will likely remain as an experimental web view for at least the next six to eighteen months. The risk of introducing something experimental is considered unfair to the tens of thousands of developers currently building with the existing technology. However, for Greenfield projects, a unified user interface that works across all five platforms, without relying on Chromium, is seen as a valuable opportunity.
The conversation also touched on the shortcomings of existing web views, particularly the absence of critical web interfaces like WebRTC. It was emphasized that calling something a web view while missing essential features is problematic. The speaker shared personal experiences with issues caused by the Safari approach, where the Safari web view (WKWebView) was tied directly to the Mac version. In one instance, the web view had not implemented the screen sharing dialogue, which was only available in Safari proper. This limitation forced the speaker to wait for an update to both Safari and their computer before they could implement the desired functionality.
The discussion highlighted the challenges of the Apple approach, where some apps are tightly integrated with the OS, leading to uncertainty about whether users are updating their systems. This contrasts with the flexibility of choosing the version of the web view. Another issue with the Safari-based web view on macOS is the lack of a testing harness available for the web view, which makes it difficult to implement web driver interfaces. The speaker expressed a desire to discuss these issues further, emphasizing that the solutions are not overly complicated.
In a world where tech giants control the tools we use, true innovation comes from community-driven projects that prioritize user empowerment over profit.
The speaker emphasizes that they built Tauri from the first principle of giving people power. They acknowledge that the operating system is the best option for secure updates, believing that Safari is going to do a better job of shipping secure updates than most vendors, but there are still nuances that have become pain points for users. The team decided it was important to delve deeper into these issues, as it appears no one else is addressing them. After five years of waiting, they concluded that the situation has not improved.
The speaker humorously suggests that if you give someone a cookie, they will likely ask for a glass of milk. They reveal that there are experiments underway to create a bootable microcontroller image that would instantly enter kiosk mode, running a Verso-based web view controlled by Tauri. They express enthusiasm about this development, asking, "How big would that be? Could you run that on some pretty small hardware?"
They mention that they are targeting the STM32, a tiny ARM chip commonly used in low-power devices and industrial applications. The speaker highlights that the chip's IP situation is very clear, and it is easy to acquire and manufacture. They note that with a large enough order, it's even possible to fabricate a custom ASIC based on the expanded STM32 series, integrating your own logic onto the silicon. While they clarify that they are not suggesting they will pursue this route, they are excited about the research being conducted.
The conversation shifts to the Tauri project, which the speaker describes as vibrant and always in motion. They have been following its progress for a long time and appreciate the continuous hard work being put into it. However, they express curiosity about how the project sustains itself financially, noting that there is nothing on the website indicating a paid product.
The speaker explains that the project is a purely open-source initiative hosted by a Dutch Foundation, where no one involved receives any monetary compensation. It operates on a volunteer basis, with a few companies, including the speaker's, covering the salaries of senior engineers. They made a conscious decision early on that this would not be intellectual property that could be privatized. The speaker emphasizes that it is legally impossible for any individual in the community to change the license. If the board of directors attempted to privatize the project, the working group would likely revolt, leading to a vote of no confidence and the removal of the board.
This structure represents an ideal version of a community-driven project. However, the speaker acknowledges that people have jobs and personal lives, which can make it challenging to dedicate time to open-source work. They mention that they raised funds from a notable VC, Joseph Jacks of OSS Capital, along with a group of amazing angels, to complete version 2.0 of the project. They dedicated considerable time, effort, and resources to build products that serve the Tauri ecosystem.
Finally, the speaker highlights the importance of the auto updater service, which they recognized as a key challenge for users. This service hosts the latest and previous versions, provides release notes, and supports all operating systems, ensuring that users can easily access updates.
Building a community-driven project means embracing collaboration and simplifying complex tasks, so everyone can focus on what truly matters.
Our goal was to create a service that is so painless that users would only need to visit the website twice, set up the CI once, and check their credit card invoice at the end of the month. It’s crucial to get this right; it's akin to the last part of running a marathon or building a desk. If you forget to hydrate during a marathon, you won't finish; similarly, if you neglect to treat the wood for your desk, you may end up with termites. These are lessons learned from experience, and for those who have never navigated this process before, it can be time-consuming and costly. Startups, in particular, often operate on tight schedules, burning the candle at both ends. Thus, if we can help them set up their systems in just 5 to 10 minutes, with support from our team if needed, it becomes a significant win for the broader community.
The main service that Crab Nebula provides is a cloud platform where users can store, ship, and update their assets. Additionally, we are working on features to help prove compliance with new European regulations. To clarify, it is Crab Nebula (crabnebula.dev). While I don’t want this to sound like an advertisement, what I truly appreciate about it is that it simplifies the necessary tasks you would have to undertake anyway. Sure, you could manage without Crab Nebula, but it wouldn’t be as enjoyable or efficient.
Regarding the funding of Tauri, we have discussed how we secured funding, but I want to emphasize that we will always be involved in its development. The risk of relying solely on one company is that it may appear as if everything is being pushed by that entity. While we are heavily involved, the contributions and participation from the open-source community are essential. Without this collaboration, Crab Nebula wouldn't have been able to achieve its goals alone.
It's worth noting that the Tauri community's working group has officially approved Crab Nebula as a partner, which is beneficial for us. We have collaborated with the working group to compile information in the Tauri documentation, guiding users to avoid rolling their own updater and instead check out our service. If they encounter issues debugging IPC calls, we recommend using Crab Nebula's DevTools. This approach is similar to how Expo shares its services, and we have modeled our strategy on existing successful frameworks.
For those looking to monitor errors in their applications, we suggest checking out Sentry at sentry.io/syntax. It’s crucial not to deploy a production application without visibility into potential issues that may arise.
Now, I’m curious about the iOS aspect. When you need to go native, such as interfacing with native APIs that Tauri does not provide, like calendar, native maps, or Bluetooth, the idea is that there is a large community, similar to React Native's, that builds bridges for these functionalities. Writing Swift is just another programming language, much like Kotlin. To access Swift from the JavaScript running in your Tauri app, you would create a bridge, and we would write that bridge in Rust. The bridge is then available to Rust and exposed through the same JavaScript command API you already use to invoke and respond, which lets you communicate directly with the native side of the app.
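The invoke/respond flow can be pictured with a toy dispatcher in plain Rust. This is only a sketch of the pattern, not Tauri's actual plugin API; the command names and handler signatures here are invented for illustration.

```rust
use std::collections::HashMap;

// A command handler takes a payload string and returns a response string.
type Handler = fn(&str) -> String;

// A hypothetical "vibrate" command that a native haptics layer would back.
fn vibrate(payload: &str) -> String {
    format!("vibrated with intensity {payload}")
}

fn echo(payload: &str) -> String {
    payload.to_string()
}

// Register named commands, the way a plugin exposes them to JavaScript.
fn build_registry() -> HashMap<&'static str, Handler> {
    let mut commands: HashMap<&'static str, Handler> = HashMap::new();
    commands.insert("vibrate", vibrate);
    commands.insert("echo", echo);
    commands
}

// The "invoke" half: look up the command and run it, or report an error.
fn invoke(registry: &HashMap<&'static str, Handler>, cmd: &str, payload: &str) -> String {
    match registry.get(cmd) {
        Some(handler) => handler(payload),
        None => format!("unknown command: {cmd}"),
    }
}

fn main() {
    let registry = build_registry();
    println!("{}", invoke(&registry, "vibrate", "0.8"));
    println!("{}", invoke(&registry, "ping", ""));
}
```

In the real thing, the handler body would call through to Swift or Kotlin, and the dispatch happens over Tauri's IPC rather than a plain function call.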
Building mobile apps with Tauri is a journey of collaboration and adaptation, leveraging community-driven solutions to bridge the gap between native APIs and JavaScript.
Recently, for a client, we built a mapping app that leveraged CLLocation and haptics, so when the ride arrives, you get a buzz. Those were all custom Tauri plugins that we wrote, and they are open source for everybody. There is so much prior work and good stuff out there, and thanks to the open-source community, we can learn from where people went right or wrong. NativeScript, Capacitor, and React Native have all dealt with these problems in one way or another, and at the end of the day, it all comes down to the same native interfaces.
So, what was the hardest part of bringing mobile into Tauri? It had been around on desktop for a little while, and I know you had to rewrite a considerable amount of things. Getting mobile applications to work well with Tauri, overall, what was the hardest part of that whole process? Everything that happens after the app is compiled interacts with the entire developer ecosystem provided by Apple and Google. Interacting with those services is tricky because, first of all, you want to automate everything you can. Secondly, you also need a human in the loop in certain places. You can't just wildly release an accidental update to every device in your fleet; there has to be a human involved.
I think we have worked hard to keep the tools provided by the respective ecosystems from being absolutely necessary, but sometimes you just have to use them, and sometimes it's simply more efficient. An interesting comment I've heard people make about Tauri is that you can write code once for all the platforms. For those who don't know Rust, it has an amazing feature called conditional compilation: the compiler detects the target operating system and hardware it's building for and strips out everything else. For instance, if you're building for macOS on ARM, you're not going to compile all the slightly different code paths you would need for Windows.
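A minimal, self-contained illustration of conditional compilation (not taken from the Tauri codebase): the function compiled for one target simply does not exist in builds for other targets.

```rust
// Conditional compilation: code for other targets is removed entirely at
// compile time, so a macOS ARM build carries no Windows-only code paths.
#[cfg(target_os = "windows")]
fn platform_greeting() -> &'static str {
    "hello from Windows"
}

#[cfg(not(target_os = "windows"))]
fn platform_greeting() -> &'static str {
    "hello from a non-Windows target"
}

fn main() {
    println!("{}", platform_greeting());
    // cfg! gives the same information as a boolean, without removing code.
    if cfg!(all(target_os = "macos", target_arch = "aarch64")) {
        println!("built for macOS on ARM");
    }
}
```

Only one of the two `platform_greeting` definitions survives into any given binary; the other is discarded before code generation, which is what keeps a Tauri build lean per platform.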
The trickiest part that we faced, Scott, to come back to your question directly, was that in order to run a Tauri app in those mobile contexts, we had to change to a library approach inside of Rust. That means a different way of compiling the app and then interacting with it. Ultimately, we don't own everything anymore, and for Rust nerds it's a hard pill to swallow to finally arrive at a situation where you are not the absolute owner of every piece of memory out there. It's a caveat that I think is acceptable in cases where we have to interface with low-level subsystems, such as the system web view.
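As a rough sketch of what the library approach looks like in practice, a Tauri mobile project's Cargo.toml typically declares library crate types so the Apple and Google toolchains can link the Rust code; the exact list may vary by Tauri version and project template.

```toml
# Cargo.toml: build the app as a library so mobile toolchains can link it.
[lib]
# staticlib for iOS linking, cdylib for Android, rlib for the desktop binary.
crate-type = ["staticlib", "cdylib", "rlib"]
```

This is what "not owning the entry point anymore" means concretely: on mobile, the OS-provided runtime hosts the process and calls into the Rust library, rather than Rust owning `main`.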
If something changes and it becomes suddenly easier through work being done by people in the Rust, Android, or iOS ecosystems, we'd absolutely consider revisiting that and making alternative approaches available to people.
I'm just looking at the haptics plugin here, and it's really not that much. It makes sense once you see the Swift source for the haptics plugin: there are about 100 lines of Swift in there and maybe another 20 lines to register the plugin via Rust. Then you can call that thing from JavaScript. That's pretty neat! I'll link up the example in the show notes.
I also have to say that the mobile ecosystem has a couple of decades' head start on Tauri, and we are not a company with a thousand employees and billions of dollars of funding. The community is based on people who want to help each other. We knew that we would never be able to hit the full surface area of every single type of plugin that everybody wants out of the gate.
The strength of a community lies in its willingness to help each other grow and innovate together.
The approach we adopted was approved by our auditors from Radically Open Security in the Netherlands, who reviewed Tauri 2.0 via funding from the European Commission. They agreed to evaluate several types of plugins for veracity, simplicity, readability, and security. Handing an auditing firm 150 different plugins to review would require a significant amount of time, even with a team of four or five people. There simply aren’t that many experts in the security field working in the same place, which makes it a logistical nightmare.
To simplify the process, we aimed to make it so straightforward that even community members could contribute. For instance, someone in the community mentioned they were looking for a plugin, went over to Capacitor, found one there, and figured out how to write it for Tauri because they were skilled Rust engineers. This understanding of how plugins work is beneficial, as it opens up a market for other companies to get involved and create plugins for their customers. Perhaps one out of ten of these plugins gets contributed back upstream to the open-source community.
I have found the Tauri ecosystem to be vibrant, particularly in terms of community support. Whether it’s on Discord or GitHub, whenever I seek help, there is always someone ready to assist. There is a lot of activity, with people engaging in discussions and providing support to one another. This is crucial when selecting a platform; you certainly don’t want to choose something that resembles a ghost town. I have experienced some less-than-friendly web communities before, but Tauri is definitely not one of them. I truly appreciate how everyone collaborates.
I am curious about the larger apps that have been developed using Tauri. Are you aware of most of the apps that have shipped with Tauri, or are new ones emerging frequently? I have two direct comments on this. First, we do not run telemetry; that was a conscious choice. Perhaps we could add something optional in the future. However, I do know that, for example, Sourcegraph's Cody is written in Tauri. Additionally, GitButler has chosen Tauri for their platform. If you watch Scott Chacon, he often includes a slide in his talks showcasing Tauri. I even sat next to him at a conference while he was coding in Rust and wrestling with the compiler, but he ultimately prevailed.
You can find apps developed with Tauri in two places within the community. There is the excellent Awesome Tauri repository, where individuals can showcase their private and proprietary projects, using it as a platform to spread the word. For more interaction or feedback, many people use the Tauri Discord, where we have a showcase. You can present your app and provide updates to users, who can then give feedback that bubbles up to the top of the showcase. This is a great way to engage with the Tauri community.
If I were to categorize Tauri apps into genres, I would say that those who are not deterred by an early adopter tax are primarily dev-tools developers. They are creating tools to enhance their development experience. I have also seen numerous teamwork apps being built with Tauri. Moreover, a unique trend is the creation of add-ons for Twitch streamers. For instance, there was a plugin that recently got updated which converts voice to text and displays the text on the screen. This showcases the versatility and potential of the Tauri platform.
If you're willing to put in the time, learning to code isn't just for the experts—it's for anyone ready to embrace the challenge.
Many of those teamwork apps suggest that the productivity market is where a lot of young startups are sharpening their skills and entering the fray. Games are popular as well, though the focus there is not necessarily on the games themselves but on the launchers that developers create around them. These launchers serve multiple functions, such as managing assets, handling version control, and facilitating downloads. This is particularly important for game studios with multiple titles, as it helps keep players engaged and informed about new releases or updates.
The app ecosystem is vast and impressive, with a massive list of applications available, including EPUB readers, screen recorders, visualizers, and keyboard-driven database management tools. It's quite enjoyable to explore this extensive collection, especially since the list marks which apps are paid or closed source. This is intriguing for those looking to run a business or develop a small app that they can charge for, even if it's just six dollars.
From my perspective, even for someone who doesn't write Rust, it serves as a good introduction to the language. As we've discussed, you don't need to dive too deeply into it. Personally, I am not a Rust engineer, yet I was able to ship some substantial projects just by getting started. Building an app is genuinely enjoyable. My partner's daughter, who is 24 and recently graduated from college, expressed interest in learning Rust despite having no prior programming experience. I introduced her to the Rustlings course, which is an excellent way to grasp the basics. She has been watching the video solutions and, interestingly, has been participating in weekly challenges published by Cassidy Williams.
One of the challenges we tackled involved understanding time in log lines, specifically calculating the elapsed time between jobs. For Cheyenne, this was significant since she had never been exposed to a production environment before, though she has now started signing her commits. When we decided to present our work to Cassidy, we compiled it to Wasm and ran the Wasm blob in the browser via GitHub Pages. During our discussion about performance, I mentioned that Rust is generally faster than JavaScript. Cheyenne was skeptical, so we decided to test it. She copied her Rust function into ChatGPT and converted it to JavaScript. We then integrated it into our project, and the results were clear: Rust outperformed JavaScript every time.
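A small, std-only sketch of the kind of challenge described, assuming `HH:MM:SS` timestamps; the real challenge's log format isn't shown here, so this is illustrative only.

```rust
// Convert an "HH:MM:SS" timestamp into seconds since midnight.
fn to_seconds(stamp: &str) -> Option<u32> {
    let mut parts = stamp.split(':');
    let h: u32 = parts.next()?.parse().ok()?;
    let m: u32 = parts.next()?.parse().ok()?;
    let s: u32 = parts.next()?.parse().ok()?;
    Some(h * 3600 + m * 60 + s)
}

// Elapsed seconds between a job's start and end log lines.
fn elapsed(start: &str, end: &str) -> Option<u32> {
    Some(to_seconds(end)?.saturating_sub(to_seconds(start)?))
}

fn main() {
    // A job starting at 09:15:00 and finishing at 09:17:30 ran for 150 s.
    println!("{:?}", elapsed("09:15:00", "09:17:30"));
}
```

A function like this is also easy to compile to Wasm and call from a page, which is roughly the shape of the browser demo mentioned above.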
However, we encountered an issue when she used an i32 type, which has limitations on the size of numbers it can handle. This led us to confront the concept of BigInt and the challenges associated with handling large numbers. The reason I bring this up is that if someone with no formal computer science background can complete coding challenges in just three weeks while learning about JavaScript simultaneously, there truly is no excuse for anyone to avoid learning, aside from being unwilling or preoccupied with other commitments. It’s not difficult if you dedicate the time. Honestly, the three of us wouldn't be here today if we had chosen to be lazy and not invest our efforts into learning. Furthermore, I've found that writing Rust has even improved my skills as a TypeScript developer, as there are many similarities between the two languages, especially for those not coming from a traditional programming background.
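The i32 ceiling is easy to demonstrate in a few lines of Rust; `checked_add` surfaces the overflow, and widening the type plays the role BigInt plays in JavaScript. This is a generic illustration, not Cheyenne's actual code.

```rust
// i32 tops out at 2_147_483_647, which large counters can easily exceed.
fn main() {
    let max: i32 = i32::MAX;
    // checked_add makes the overflow visible instead of wrapping or panicking.
    assert_eq!(max.checked_add(1), None);
    // Widening the type is the usual fix, much as JavaScript reaches for
    // BigInt once a value passes Number.MAX_SAFE_INTEGER.
    let widened = max as i64 + 1;
    println!("{widened}"); // prints 2147483648
}
```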
If you commit to learning and put in the effort, you can achieve more than you think, regardless of your background.
The conversation then shifted to the future of Tauri, specifically the anticipated release of version 2.0, which is scheduled to go stable in about two weeks, a significant milestone for the Tauri ecosystem. The phases of the release were explained: the alpha phase involves experimentation, the beta phase means the architecture has been decided, and the release candidate (RC) phase is focused on bug fixing and documentation.
As of the recording date, August 27th, the expectation was a stable launch in September. Looking ahead to Tauri 3.0, one challenge noted was that highly motivated engineers can feel constrained by documentation tasks, which is why a structured release process, occurring approximately every two years, matters: it allows for collaboration with other teams on significant projects.
The conversation also touched on the increasing engagement in the mobile ecosystem, particularly among people like Cheyenne, who have innovative ideas but may lack the means to fully realize them. AI tooling was highlighted as an unstoppable force in this context: much of the code executed inside a Tauri app will likely be written by large language models, either guided by humans or slightly modified.
Innovation is essential; without it, even the best projects risk fading into irrelevance.
For absolute beginners, who may not yet understand fundamentals like tabs versus spaces or why linting matters, a lack of understanding is not an excuse big enough to prevent us from finding concrete ways to help them. I would see the transition to 3.0 as one of those experiences where we fix the things that have been bugging us along the way. We see it as a maintenance release, but we work really hard on improving the developer experience for everybody, no matter who they are, where they're coming from, or what programming language they're using. Our goal is to turn Tauri into a very utilitarian, agnostic framework that is beneficial for all.
I guess it sounds a bit cliché, but I really believe that software engineering has a way of changing people, and it can change them for the better if they understand what it is that they're doing. I'm not going to go on a philosophical rant right now, but there’s something that feels good at the end of the day when your Rust code compiles. You commit your code to your version control system, and you come back the next day, and it’s still working.
The risk for a project like Tauri was made really clear to me back in the days right before we launched 1.0 stable. That was the week Internet Explorer was deprecated, and the announcement that Atom was being sunset came just a week before 1.0 stable was released. I talk about this at almost every major release because it still sits with me: if you do not continue to innovate, your project will at some point fade in relevance. This is tempered by the desire of a Rust engineer, and I count myself among them, to have something finished.
I think the notion of being done is something you can achieve with a compiled language, but it’s hard to do with JavaScript. Consider try-catch: you wrap things because you just don’t know what might throw. In Rust, you have to work really hard to end up with that kind of uncertainty. The question we’re going to ask ourselves as we get to 3.0 is: what does done look like for Tauri, and how can we keep on innovating?
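A tiny example of why fallibility in the type system supports that notion of done: a `Result` forces both outcomes to be handled at the call site, so there is no invisible exception path. The `parse_port` function here is illustrative, not from any real codebase.

```rust
// Fallibility lives in the signature: callers can see this may fail.
fn parse_port(input: &str) -> Result<u16, std::num::ParseIntError> {
    input.trim().parse::<u16>()
}

fn main() {
    // The compiler makes us consider both arms; nothing is thrown.
    match parse_port("8080") {
        Ok(port) => println!("listening on {port}"),
        Err(e) => println!("bad port: {e}"),
    }
}
```

In JavaScript the equivalent failure mode is an exception that any caller may or may not have wrapped in try-catch; in Rust, once the match above compiles, that code path is, in a meaningful sense, finished.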
As I mentioned earlier, the momentum to build a web view good enough for everybody but designed for Tauri is a huge lift. I think that’s something that’s going to carry on for a decade, probably. I hinted at it, but I believe that interoperability with other programming languages like Python, PHP, and script-based languages, whether TypeScript or any other, opens up the venue for everybody. You don’t have to be a Rustacean to even touch it at that point.
That’s about as far out the window of this train as I can lean. I love what you said about the concept of done; it’s something that we in the JavaScript world don’t have. If we see a package that hasn’t seen an update in six months, we’re like, “Is this abandoned?” In Tauri, or even in Rust in general, I can’t tell you the number of times I’ve seen people ask about abandoned packages and be told, “What do you mean abandoned? It’s done.” I’ve never even thought about that in the web world, so it’s an interesting concept that is so far removed from it.
Let’s move into the last section here. We have a sick pick and a shameless plug. Did you bring a sick pick for us? My pick would be 5-Second Films. It’s a troupe of comedians who’ve made short films for almost 20 years. I have one in my watch history that says I watched it at some point in the past 13 years. What I love about these short films is that they force you as a viewer to come to terms with a situation you weren’t prepared for, really quickly find humor in it, and move on.
In the world I live in—software engineering—we often get stuck with these ideas in our heads, and it’s hard to flush them. I think this group does a really great job. I’m not an affiliate, but I encourage you to support them on Patreon; it’s a good thing.
Now, what about a shameless plug? I’m actually writing a book on the weekends. I got involved in the discussion around the Cyber Resilience Act in Europe last year when everybody was going crazy about how open source was going to have to turn into a product. As time went on, I had more conversations, was on panels, held talks, and got my notes together. I decided that it’s an important opportunity, especially in the weeks before this act gets published in Europe, to create a guide for people just building software.
The Cyber Resilience Act is all about the notion that products with digital elements are now going to be regulated in Europe. Sure, there will be a few years to get into compliance, but the notion of product, which I love, and engineering, which I need, are intertwined in these complex documents that go back 15 years. In one document they’ll use a single word, and everyone knows exactly what that word means if they’ve been around since 2012 and have been reading all these things. It’s such a behemoth of a topic that I thought I would take some of the mystery out of it and polish up my draft.
I already have the ISBN numbers, and I’m just waiting to publish it. What’s it going to be called? Manufacturing European Software. Awesome! Can people find that anywhere? It’s going to be on Amazon, on my corporate website, and I’ll be promoting it on LinkedIn and at conferences. I have four ISBNs, and I’m also printing physical copies, so if you see me somewhere, I’ll always have a couple of copies with me.
It’s been a really exciting journey, especially because the topic is so underserved, and there’s so much FUD out there. The act has changed over the past three years; things that people wrote about it two years ago are just wrong—literally wrong—because this thing has been evolving. So, that’s my shameless plug: I’m writing a book.
Cool! Well, thank you so much. This has been incredible. It’s been great to hear about Tori, but not only that, just all your amazing thoughts and everything. Thank you so much; this has just been really enlightening. If you haven’t checked out Tori, we’ll post all the links to everything that we talked about in these show notes. I highly recommend giving it a try; it’s an incredible piece of software and a great platform.
Thank you so much, Daniel, for coming on!
Thanks for having me; it was a pleasure.