Improved UI and extended functionality

Hello Coders! 👾

In the last few weeks, I made some great updates to my AI assistant Rosie.

Rosie Core

Loading old conversations

I added functionality to load old conversations. Every conversation has an ID, and based on that ID a JSON file with the full conversation is saved after every response from Azure OpenAI. Whenever I select a previous file, it is loaded as if it were the conversation currently in progress. This is a feature I have wanted for a VERY long time.
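
The mechanics are simple. A minimal TypeScript sketch of the idea (not Rosie’s actual code; the file layout and types are assumptions):

```typescript
// Persist a conversation to a JSON file keyed by its ID, and load it back later.
import { promises as fs } from "fs";
import * as path from "path";

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface Conversation {
  id: string;
  messages: ChatMessage[];
}

const CONVERSATION_DIR = "./conversations"; // hypothetical storage location

// Called after every response from Azure OpenAI.
async function saveConversation(conversation: Conversation): Promise<void> {
  await fs.mkdir(CONVERSATION_DIR, { recursive: true });
  const file = path.join(CONVERSATION_DIR, `${conversation.id}.json`);
  await fs.writeFile(file, JSON.stringify(conversation, null, 2), "utf-8");
}

// Called when a previous conversation is selected in the UI.
async function loadConversation(id: string): Promise<Conversation> {
  const file = path.join(CONVERSATION_DIR, `${id}.json`);
  return JSON.parse(await fs.readFile(file, "utf-8")) as Conversation;
}
```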

Copy Responses

Another thing on my wish list was the ability to copy a response, and not by selecting the rendered text, but the original. Most responses use markdown, which is rendered so headers, lists and code blocks display correctly. But I often want to paste the result into another application and keep the markdown, so I added a button that does just that. I also added a button to copy code blocks, but since that has to be hooked into the rendering of code, it depends on DOM change events, and for some reason these do not always trigger.
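
Stripped down, the two buttons come down to something like this (a TypeScript sketch; the element IDs and attributes are made up for illustration):

```typescript
// Copy the original markdown of a response instead of the rendered HTML.
async function copyResponseMarkdown(markdown: string): Promise<void> {
  await navigator.clipboard.writeText(markdown);
}

// Code blocks only exist after the markdown renderer has run, so the copy
// buttons are attached when the DOM changes. These change events are the
// part that does not always trigger reliably.
const observer = new MutationObserver(() => {
  document
    .querySelectorAll<HTMLPreElement>("pre:not([data-has-copy])")
    .forEach((pre) => {
      pre.dataset.hasCopy = "true";
      const button = document.createElement("button");
      button.textContent = "Copy code";
      button.addEventListener("click", () => {
        void navigator.clipboard.writeText(pre.innerText);
      });
      pre.appendChild(button);
    });
});

observer.observe(document.getElementById("chat")!, {
  childList: true,
  subtree: true,
});
```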

Fixed TTS and speech recognition

For a while, the text-to-speech and speech recognition were broken. Since I’m trying to get back into streaming, the TTS and STT had to work again, and now they do. To make the spoken responses better I also added prompt additions.

Prompt additions

I added functionality to append a different piece of prompt from the UI. The dropdown currently shows two options: one for normal use and one for speech. Normally I want the default response, preferably with markdown, but during TTS I want short, spoken-style responses. I might add more of these flexible extensions in the future if I need them.
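
Under the hood it is nothing more than appending a selected snippet to the system prompt, roughly like this (the snippets and names below are illustrative):

```typescript
// Sketch of the prompt additions; the actual snippets and names are made up.
const promptAdditions = {
  normal:
    "Answer in markdown and use headers, lists and code blocks where useful.",
  speech:
    "Answer in short, conversational sentences that work well when spoken aloud.",
} as const;

type PromptMode = keyof typeof promptAdditions;

// The selected addition is appended to the base system prompt before the
// request is sent to Azure OpenAI.
function buildSystemPrompt(basePrompt: string, mode: PromptMode): string {
  return `${basePrompt}\n\n${promptAdditions[mode]}`;
}
```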

UI improvements

  • I added an expander component to hide some lesser-used options and input boxes in the UI.
  • I made some changes to the UI and removed the old background. Now the installer is working correctly and I can run Rosie without VSCode.
  • I also updated a few assets, styles and a couple of packages.

A new face

I thought it was about time to update the face of Rosie as well. This is what she generated herself.

Rosie Agents

I’ve made some great improvements to the Azure Functions part as well.

Semantic Kernel

I updated Semantic Kernel to the latest preview to get the latest planner functionality. This planner uses Handlebars, and with this latest preview it is possible to add a custom planner prompt. I’m not completely happy with the results, though; the planner often seems to get into an endless loop. But it is very cool to see that you can ask a ‘simple’ thing and have the planner create a plan that calls out to all kinds of plugins. For example, from a simple prompt I can have the AI get a transcript from a live stream, extract the highlights, turn those into an outline for a blog post, and create a devlog from that. Almost. The results of the planner vary a lot, and I might switch back to a simpler structure.

HuggingFace

I was not happy with the results from DALL·E, so I decided to add a plugin to the Azure Functions project to generate images using HuggingFace. At the moment I have 5 different models implemented, and it works great! The only downside is that it can take a while for the HuggingFace server to respond, if it responds at all. But it’s free, so I’m not complaining.
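
My plugin lives in the Azure Functions project, but the underlying HTTP call is simple enough to sketch in TypeScript (the model name and error handling are illustrative, not necessarily how my plugin does it):

```typescript
// Sketch of a HuggingFace Inference API call for text-to-image generation.
async function generateImage(
  prompt: string,
  token: string,
  model = "stabilityai/stable-diffusion-xl-base-1.0" // example model
): Promise<Buffer> {
  const response = await fetch(
    `https://api-inference.huggingface.co/models/${model}`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ inputs: prompt }),
    }
  );

  if (!response.ok) {
    // Free models are loaded on demand, so slow or failing responses
    // (for example a 503 while the model spins up) are not unusual.
    throw new Error(`HuggingFace returned ${response.status}`);
  }

  // Text-to-image models return the raw image bytes.
  return Buffer.from(await response.arrayBuffer());
}
```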

.NET 8

I got complaints from Azure that I needed to update to .NET 8, so I did.

Memory

I added plugins to get data from memory. I created some very basic tools to create embeddings from markdown files using the Ada model on Azure AI, and loaded these into Azure AI Search. If needed, I can now ask questions about WebXR, Wonderland, Semantic Kernel and relevant Azure functionality.
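
In essence the tooling does something like this (a TypeScript sketch; the endpoint, deployment and index names and the document shape are assumptions, not my actual setup):

```typescript
// Turn a markdown chunk into an embedding and upload it to Azure AI Search.
import { SearchClient, AzureKeyCredential } from "@azure/search-documents";

interface MemoryDocument {
  id: string;
  content: string;
  contentVector: number[];
}

// Create an embedding with the Ada deployment on Azure OpenAI.
async function embed(text: string): Promise<number[]> {
  const response = await fetch(
    `${process.env.AOAI_ENDPOINT}/openai/deployments/ada-embeddings/embeddings?api-version=2023-05-15`,
    {
      method: "POST",
      headers: {
        "api-key": process.env.AOAI_KEY!,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ input: text }),
    }
  );
  const json = (await response.json()) as { data: { embedding: number[] }[] };
  return json.data[0].embedding;
}

// Upload the chunk plus its vector to the search index.
async function indexMarkdownChunk(id: string, content: string): Promise<void> {
  const client = new SearchClient<MemoryDocument>(
    process.env.SEARCH_ENDPOINT!,
    "rosie-memory", // hypothetical index name
    new AzureKeyCredential(process.env.SEARCH_KEY!)
  );
  await client.uploadDocuments([
    { id, content, contentVector: await embed(content) },
  ]);
}
```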

I also experimented a bit with creating a fine-tuned GPT-3.5 Turbo model based on my blog posts, to get the tone of my posts consistent. I didn’t like the results, and since running the model on Azure is very expensive, I deleted the deployment. But I did write a blog post about creating embeddings.

Plan for the near future?

At the moment my biggest wish is better error handling. Sometimes a service returns an error, but this is not handled in the UI yet, so it just fails silently. The only way to check right now is to open the dev tools. I want to surface these errors in the UI.
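
The direction I’m thinking of is roughly this (a sketch of the idea; nothing of this exists yet and all names are placeholders):

```typescript
// Wrap service calls so failures end up in the UI instead of only in the console.
async function callService<T>(url: string, body: unknown): Promise<T | undefined> {
  try {
    const response = await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
    });
    if (!response.ok) {
      throw new Error(`${url} returned ${response.status}`);
    }
    return (await response.json()) as T;
  } catch (error) {
    // Instead of staying silent, show the error to the user.
    showError(error instanceof Error ? error.message : String(error));
    return undefined;
  }
}

function showError(message: string): void {
  const banner = document.getElementById("error-banner")!; // hypothetical element
  banner.textContent = message;
  banner.hidden = false;
}
```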

But maybe the biggest feature I want to add is having Rosie respond to external events, for example a tweet, a PR on a GitHub repo, or an email. I think I am going to use Microsoft Power Automate to create triggers and push these into an Azure Storage Queue, so I can create an Azure Function that responds to those messages and lets Rosie handle them.
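
As a rough sketch of what that queue-triggered function could look like (shown with the Node.js programming model for illustration, even though the current functions run on .NET; nothing of this exists yet):

```typescript
// Power Automate pushes an event onto an Azure Storage Queue and a
// queue-triggered Azure Function hands it to Rosie.
// The queue name and payload shape are assumptions.
import { app, InvocationContext } from "@azure/functions";

interface ExternalEvent {
  source: "tweet" | "github-pr" | "email";
  payload: string;
}

app.storageQueue("handleExternalEvent", {
  queueName: "rosie-events", // hypothetical queue fed by Power Automate
  connection: "AzureWebJobsStorage", // app setting with the storage connection string
  handler: async (message: unknown, context: InvocationContext): Promise<void> => {
    const event = message as ExternalEvent;
    context.log(`Received a ${event.source} event`);
    // From here the event would be passed on to Rosie to respond to.
  },
});
```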

So, stay tuned for more :)

Happy Coding! 🚀