Auto-synced from project changelogs and manual site updates. Updated 2026-05-06
2026-05-06 · Workflow Orchestrator
Live
Workflow Orchestrator: chain any tool into a saved, schedulable pipeline
The Workflow Orchestrator is now front-and-centre on JAD Apps. Chain any of the 236 browser-native tools — converters, cleaners, anonymisers, composers — into a saved, schedulable workflow that runs end-to-end without uploads or glue code. Start from a curated template (Monthly Financial Pack, Image Batch → CDN, Podcast Publish Pipeline, and more) or build your own from scratch with drag-and-drop steps. Every workflow keeps the data factory's privacy guarantee: file contents stay in your browser.
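At its core, chaining browser-native tools is function composition over an in-memory payload. A minimal sketch (illustrative Python, not JAD Apps source — the toy "tools" and names here are invented for the example):

```python
# Illustrative sketch: a workflow is an ordered list of tool steps, each a pure
# function on an in-memory payload, so chaining them never requires an upload.
from typing import Callable, List

Step = Callable[[bytes], bytes]

def run_workflow(steps: List[Step], payload: bytes) -> bytes:
    """Run each step in order, feeding each step's output into the next."""
    for step in steps:
        payload = step(payload)
    return payload

# Two toy "tools" (hypothetical, for illustration only):
def drop_blank_lines(data: bytes) -> bytes:
    return b"\n".join(line for line in data.split(b"\n") if line.strip())

def uppercase_header(data: bytes) -> bytes:
    lines = data.split(b"\n")
    return b"\n".join([lines[0].upper()] + lines[1:])

result = run_workflow([drop_blank_lines, uppercase_header], b"name,age\n\nada,36\n")
```

Because every step is a plain function over bytes, a saved workflow is just the ordered list of step names — nothing about the file itself ever leaves the browser.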
2026-05-02 · Data Factory
Live
JAD Apps relaunches as a global data factory
The site has been rebuilt around a privacy-first data factory: direct CSV tools, a separate guides hub for workflow-specific landing pages, refreshed studio and news sections, account-backed usage dashboards, and Stripe-powered Pro upgrades. File contents stay in the browser while the factory now sits alongside the broader JAD Apps studio portfolio.
2026-05-01 · ECSIE
Coming Soon
New research project: next-generation AI inference efficiency
Exploring novel techniques in AI model quantization and inference-time efficiency for small-scale architectures. Early-stage research — no further details at this time. Follow for updates.
2026-04-25 · SmrtPad
In Development
SmrtPad v0.9.5 — SmrtDoodle bridge integration and SmrtAI.Core shared library
v0.9.5 ships the SmrtDoodle bridge alongside a new SmrtAI.Core shared class library that packages IAIDispatcher, AIDispatcherAvailability, SemanticSearchModels, the SmrtDoodle IPC contract, and the SmrtDoodleFrame length-prefixed JSON frame serialiser for reuse across SmrtPad, SmrtPad.AI, and SmrtDoodle. SmrtDoodleIpcService is fully rewritten to protocol activation (smrtdoodle://edit?pipe=…&v=1) with a per-session named pipe — it optionally sends a source PNG frame, awaits the result or a cancel signal, and returns the decoded PNG to the document. If SmrtDoodle is not installed the button opens the Microsoft Store search page. A return-image dialog lets users choose Replace selection or Insert as new image. Package version bumped from 0.6.0.0 to 0.9.5.0.
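The length-prefixed JSON framing described above can be sketched as follows. This is an illustrative Python codec, not the shipped C# SmrtDoodleFrame; the 4-byte big-endian prefix and the message field names are assumptions:

```python
# Illustrative sketch of a length-prefixed JSON frame codec in the spirit of
# SmrtDoodleFrame: a 4-byte big-endian length header followed by UTF-8 JSON.
import io
import json
import struct

def write_frame(stream, message: dict) -> None:
    body = json.dumps(message).encode("utf-8")
    stream.write(struct.pack(">I", len(body)))  # 4-byte length prefix
    stream.write(body)

def read_frame(stream) -> dict:
    (length,) = struct.unpack(">I", stream.read(4))
    return json.loads(stream.read(length).decode("utf-8"))

# Round-trip over an in-memory stream; "type"/"png" are hypothetical fields.
buf = io.BytesIO()
write_frame(buf, {"type": "result", "png": "base64-payload"})
buf.seek(0)
msg = read_frame(buf)
```

Length-prefixing lets both ends of the pipe read exactly one message at a time without a delimiter scan, which matters when the payload is an arbitrary base64-encoded PNG.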
2026-04-25 · SmrtDoodle
In Development
SmrtDoodle v0.9.5 — SmrtPad bridge via protocol activation and named-pipe IPC
v0.9.5 adds the SmrtPad bridge: SmrtDoodle now supports protocol activation via smrtdoodle://edit?pipe=…&v=1. App.xaml.cs detects the scheme on launch, parses the query string, and starts a SmrtPadBridgeSession before the main window opens. The bridge connects to a per-session named pipe, receives an optional source PNG to pre-load on the canvas, and sends the composited drawing back when the user clicks Insert into SmrtPad. Closing without inserting sends a cancel frame so SmrtPad can clean up. SmrtDoodle.csproj now references the shared SmrtAI.Core library for the IPC contract and SmrtDoodleFrame serialiser. Package version bumped from 0.7.5.0 to 0.9.5.0.
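Parsing the activation URI is straightforward with standard query-string handling. A hedged sketch (Python rather than the shipped C# in App.xaml.cs; the pipe name below is a made-up example value):

```python
# Illustrative sketch of parsing the smrtdoodle://edit?pipe=...&v=1 activation URI.
from urllib.parse import urlsplit, parse_qs

def parse_activation(uri: str) -> dict:
    parts = urlsplit(uri)
    if parts.scheme != "smrtdoodle" or parts.netloc != "edit":
        raise ValueError(f"not a SmrtDoodle edit activation: {uri!r}")
    query = parse_qs(parts.query)
    # pipe: per-session named-pipe name; v: protocol version for compatibility.
    return {"pipe": query["pipe"][0], "version": int(query["v"][0])}

# "sd-4f2a" is a hypothetical pipe name for illustration.
session = parse_activation("smrtdoodle://edit?pipe=sd-4f2a&v=1")
```

Carrying a version number in the URI means a future SmrtPad can negotiate protocol changes without breaking older SmrtDoodle installs.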
2026-04-24 · SmrtPad
In Development
SmrtPad v0.9.0 — Skill-aware sampling, think/no-think benchmark matrix, and model prompt policy tests
v0.9.0 introduces a SkillContext ambient carrier that drives precision sampling (Temperature 0.3, TopP 0.85, Seed 42) for autocomplete and OCR skills to reduce stochastic variance. All prompt templates are rewritten model-neutral with explicit think-tag suppression and a consistent output contract. The benchmark matrix now covers both NoThink and Think reasoning modes across every GGUF model on both GPU and CPU paths; BenchmarkResult, BenchmarkRun, and the live HTML dashboard all surface ReasoningTag per row. New ModelPromptPolicyTests cover SupportsThinkingMode, GetModelDirective, NormalizeMode, and BuildSystemPrompt for all model families. Chat-template turn markers are scrubbed from token streams before scoring to prevent leakage into output.
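An ambient carrier like SkillContext can be sketched with Python's `contextvars` standing in for the C# original; the precision settings are the ones the entry lists (Temperature 0.3, TopP 0.85, Seed 42), while the default chat settings here are invented for contrast:

```python
# Illustrative sketch of an ambient skill context driving sampler settings,
# so precision skills get deterministic sampling without threading parameters
# through every call site.
from contextvars import ContextVar

_current_skill: ContextVar[str] = ContextVar("skill", default="chat")

PRECISION_SKILLS = {"autocomplete", "ocr"}

def sampling_params() -> dict:
    """Return sampler settings for whichever skill is ambient."""
    if _current_skill.get() in PRECISION_SKILLS:
        # Low-variance settings from the changelog entry above.
        return {"temperature": 0.3, "top_p": 0.85, "seed": 42}
    # Hypothetical defaults for non-precision skills (not from the changelog).
    return {"temperature": 0.8, "top_p": 0.95, "seed": None}

token = _current_skill.set("autocomplete")
precise = sampling_params()
_current_skill.reset(token)
default = sampling_params()
```

The set/reset token pattern keeps the ambient value scoped to one operation, which is the property that lets autocomplete and OCR opt into fixed-seed sampling without affecting concurrent chat requests.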
2026-04-22 · MarkUp Markdown Editor
Released
MarkUp Markdown Editor v1.7.0 — Rich-text paste from browsers and Office
v1.7.0 adds full HTML-to-Markdown paste support: pasting from any browser, Microsoft Word, or Outlook automatically strips Word/Office noise (namespace tags, mso-* styles, conditional comments, lang attributes) and converts the content to clean Markdown. Headings, bold, italic, tables, ordered and unordered lists, code blocks, blockquotes, and links all convert correctly. Paste works in both the editor pane and the contentEditable preview pane. 37 new HtmlToMarkdownConverter unit tests cover Word HTML stripping, CF_HTML fragment extraction, combined bold+italic spans, bullet glyph removal, and empty heading suppression; 2 new DocumentLoad tests and 2 new UI tests verify the paste and file-open workflows end-to-end.
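CF_HTML fragment extraction, mentioned above, relies on the clipboard header carrying byte offsets to the real fragment. A simplified sketch (the full Windows CF_HTML header also includes StartHTML/EndHTML and SourceURL fields, omitted here; this is not MarkUp's converter code):

```python
# Illustrative sketch of CF_HTML fragment extraction: the header's
# StartFragment/EndFragment fields are byte offsets into the whole payload.
import re

HEADER_TPL = "Version:0.9\r\nStartFragment:{:010d}\r\nEndFragment:{:010d}\r\n"
PRE = "<html><body><!--StartFragment-->"
POST = "<!--EndFragment--></body></html>"

def build_cf_html(fragment: str) -> str:
    """Build a minimal CF_HTML payload whose header offsets point at `fragment`."""
    header_len = len(HEADER_TPL.format(0, 0))  # offsets are fixed-width, so stable
    start = header_len + len(PRE)
    end = start + len(fragment.encode("utf-8"))
    return HEADER_TPL.format(start, end) + PRE + fragment + POST

def extract_fragment(cf_html: str) -> str:
    """Read the offsets and slice the fragment out of the payload bytes."""
    start = int(re.search(r"StartFragment:(\d+)", cf_html).group(1))
    end = int(re.search(r"EndFragment:(\d+)", cf_html).group(1))
    return cf_html.encode("utf-8")[start:end].decode("utf-8")

roundtrip = extract_fragment(build_cf_html("<b>Bold</b> text"))
```

Because the offsets are byte positions, the extraction must slice the UTF-8 bytes rather than the string, which is also why pasting non-ASCII content is a common source of off-by-N bugs in clipboard handlers.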
2026-04-16 · SmrtPad
In Development
SmrtPad — Reasoning mode across inference, benchmarks, and the Smart Sidebar
Reasoning mode support lands across the entire inference and benchmark stack. ReasoningTag (NoThink or Think) is added to BenchmarkResult, BenchmarkRun, and BenchmarkRunner; every result and run carries the tag, the live HTML dashboard surfaces it per row, and the Markdown report emits a Mode column in both per-run headers and multi-model comparison tables. Reasoning mode is wired through AIDispatcherFactory.CreateFromLocalPath() and ConcreteLlamaSharpModelAdapter so the local-path factory honours the dispatcher's PreferredReasoningMode setting when constructing the adapter. The local GGUF benchmark matrix expands to cover both NoThink and Think modes per model across GPU and CPU targets, with GPU runs executing before CPU runs for thinking-capable models. A reasoning mode selector (Think / No Think radio items) is added to the Smart Sidebar options flyout so users can switch modes from within the document UI.
2026-04-15 · SmrtPad
In Development
SmrtPad — LLamaSharp GGUF runner, 12 benchmark models, and full CUDA acceleration
LLamaSharp GGUF inference is added to SmrtPad with a 12-model benchmark catalog and full CUDA GPU acceleration. ConcreteLlamaSharpModelAdapter resolves and pre-loads the CUDA DLL chain in dependency order (libomp140 → cudart64_12 → cublasLt64_12 → cublas64_12 → ggml-base → ggml), auto-copies the CUDA runtime to the expected path, and enables FlashAttention with an explicit NativeLibraryConfig and a larger batch size — achieving 192 tokens/s on a 1B GPU model. The benchmark project gains RID targets and LLamaSharp backend package references so cuda12 native DLLs land correctly in the test output directory. EnsureGgmlBackendsLoaded() is extracted as a standalone method with diagnostic logging for missing optional dependencies.
2026-04-14 · SmrtPad
In Development
SmrtPad — ORT GenAI in-process inference, live benchmark dashboard, and SmrtDoodle IPC
Three foundational infrastructure changes land together. Microsoft Foundry Local is replaced by Microsoft.ML.OnnxRuntimeGenAI (CUDA 0.12.2) for in-process ONNX model inference with GPU acceleration and CPU fallback; the new ConcreteOrtGenAiModelAdapter handles streaming, automatic chat-template detection for phi/qwen/deepseek families, and CUDA runtime library resolution from %USERPROFILE%\.SmrtPad\ep\cuda-ep\. A local model benchmark test (LocalModelBenchmarkTests) scans directories for ONNX GenAI models and benchmarks them with a live HTML dashboard that polls every 5 seconds, shows progress, pass rate, average score, and per-case result rows, and opens automatically in the default browser. A named-pipe IPC service (SmrtDoodleIpcService) is added to launch SmrtDoodle, await its exit, and retrieve the drawing result for insertion into the active document.
2026-04-13 · LocAI
In Development
LocAI v0.3.1 — CUDA GPU smoke tests and full changelog documentation
v0.3.1 adds CUDA GPU inference smoke tests (test_gpu.py and test_qwen3_gpu.py) for verifying llama-cpp-python picks up local CUDA DLLs and can run model inference correctly. The changelog has been fully populated with versioned entries covering all 169 commits from v0.1.0 to v0.3.0, and the README is rewritten to reflect v0.3.0 feature state with a feature status table, build instructions, and compatibility taxonomy.
2026-04-12 · LocAI
In Development
LocAI v0.3.0 — Web search, chat history management, and conversation compaction
v0.3.0 integrates DuckDuckGo web search directly into chat (free, keyless, no account required), adds full chat history management with clear/review/resume support, and introduces automatic conversation compaction — when the context window fills, older messages are summarised rather than dropped, so the model's responses are not cut short. Context window size is now configurable from the UI and forwarded directly to the running worker.
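The compaction step can be sketched as follows. This is an illustrative algorithm under assumptions (the token estimate, the keep-recent count, and the summariser stub are all invented; LocAI's actual heuristics are not published):

```python
# Illustrative sketch of conversation compaction: once a rough token estimate
# exceeds the context budget, collapse the oldest messages into one summary
# message instead of discarding them.
from typing import Callable, List

def compact(messages: List[str],
            budget: int,
            summarise: Callable[[List[str]], str],
            keep_recent: int = 2) -> List[str]:
    def estimate(msgs):  # crude assumption: ~1 token per 4 characters
        return sum(len(m) for m in msgs) // 4
    if estimate(messages) <= budget or len(messages) <= keep_recent:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    return [summarise(old)] + recent

# Stub summariser standing in for a real model call.
history = ["intro " * 40, "details " * 40, "question?", "answer."]
compacted = compact(history, budget=20,
                    summarise=lambda msgs: "[summary of %d messages]" % len(msgs))
```

Keeping the most recent turns verbatim while summarising only the tail is what preserves conversational continuity after compaction.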
2026-04-12 · LocAI
In Development
LocAI v0.2.8 — Markdown rendering, inference stats bar, and CUDA acceleration
v0.2.8 lands three major chat improvements: assistant messages now render full GitHub Flavored Markdown including tables, code blocks, and syntax highlighting; an inference stats bar shows time-to-first-token, tokens/s, input/output token counts, and context window usage; and llama-cpp-python now runs with CUDA acceleration with n_threads, n_batch, flash_attn, and mmap parameters exposed. Stop tokens are applied per model family to prevent end-of-turn tokens leaking into responses.
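The stats-bar arithmetic is simple but easy to get subtly wrong: time-to-first-token is measured from request start to the first token, while tokens/s should be computed over the generation span only, not the whole request. A minimal sketch (illustrative, not LocAI source):

```python
# Illustrative sketch of the inference stats computation.
def inference_stats(start: float, first_token_at: float, end: float,
                    input_tokens: int, output_tokens: int) -> dict:
    generation_seconds = end - first_token_at  # excludes prompt processing
    return {
        "ttft_s": first_token_at - start,
        "tokens_per_s": (output_tokens / generation_seconds
                         if generation_seconds > 0 else 0.0),
        "input_tokens": input_tokens,
        "output_tokens": output_tokens,
    }

stats = inference_stats(start=10.0, first_token_at=10.5, end=14.5,
                        input_tokens=120, output_tokens=200)
```

Dividing by the full request duration instead would fold prompt-processing time into the throughput figure and understate the decoder's real speed.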
2026-04-12 · LocAI
In Development
LocAI v0.2.7 — HuggingFace model browser, Downloads screen, and live stub worker
v0.2.7 adds a HuggingFace model browser for searching the Hub, browsing available quantisations, and downloading GGUF files with live progress tracking. A dedicated Downloads screen groups quantisations by family with parameter-count sort. A dev/demo stub worker runs automatically when no model files are present so the app can be explored without a model download — the broker replaces it automatically once real model files appear.
2026-04-11 · LocAI
In Development
LocAI v0.2.3 — Full i18n, plugin sandbox, complete worker suite, and CI/CD pipeline
v0.2.3 ships full internationalisation across all UI components with a French locale and in-app language selector, a plugin sandbox with capability isolation and curated catalogue, a channel-aware updater with SHA-256 integrity verification and real repair logic, and complete worker implementations — TTS fixed-voice, WinML ONNX image inference, HuggingFace streaming text generation, and Stable Diffusion img2img/inpaint. The CI/CD pipeline and automated release workflow are also in place.
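SHA-256 integrity verification of a downloaded artifact, as the updater performs, follows a standard shape: hash in chunks, then compare digests in constant time. A hedged sketch (illustrative Python, not the Rust updater code):

```python
# Illustrative sketch of download integrity verification: stream the payload
# through SHA-256 and compare against the expected digest without timing leaks.
import hashlib
import hmac
import io

def verify_sha256(stream, expected_hex: str, chunk_size: int = 65536) -> bool:
    digest = hashlib.sha256()
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        digest.update(chunk)
    # hmac.compare_digest avoids short-circuiting on the first mismatched byte.
    return hmac.compare_digest(digest.hexdigest(), expected_hex)

payload = b"release-artifact-bytes"
ok = verify_sha256(io.BytesIO(payload), hashlib.sha256(payload).hexdigest())
```

Chunked hashing keeps memory flat regardless of artifact size, which matters when the updater is verifying multi-gigabyte model packs.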
2026-04-11 · LocAI
In Development
LocAI v0.2.2 — Ed25519 activation, SQLite persistence, catalog REST API, and 289-key i18n
v0.2.2 replaces stub activation with real Ed25519 cryptography; secrets are stored in the OS keychain. Persistent storage lands with SQLite conversation history, TOML config, and file-backed profile storage. Catalog, image, audio, and profile management REST endpoints are added. A real catalog index is published with four pack manifests (text starter, image add-on, voice pack, creative audio), and a 289-key en.json locale file provides complete UI internationalisation coverage.
2026-04-09 · LocAI
In Development
LocAI development begins — local-first multimodal AI desktop suite
LocAI has entered active development. The Tauri 2 + Rust cross-platform desktop app delivers text, image, and audio generation entirely on local hardware — no cloud, no account. The initial bootstrap includes a full UI scaffold (model catalog, text/image/voice studios, plugin catalog, profiles, settings), cross-platform packaging pipelines (Windows/macOS/Linux), a channel-aware updater, production activation UX, and comprehensive test infrastructure spanning core contracts, accessibility, plugin signing, API security, and benchmark baselines.
2026-04-08 · SmrtPad
In Development
SmrtPad v0.8.0 — ORT GenAI in-process inference, live benchmark dashboard, and .NET 8 migration
v0.8.0 replaces Microsoft Foundry Local with direct Microsoft.ML.OnnxRuntimeGenAI.Cuda 0.12.2 for in-process ONNX model inference; the new ConcreteOrtGenAiModelAdapter supports streaming, automatic chat-template detection for phi/qwen/deepseek families, and CUDA runtime library resolution from %USERPROFILE%\.SmrtPad\ep\cuda-ep\. ModelDownloadService downloads ONNX GenAI model files from HuggingFace Hub with live Smart Sidebar progress integration, and AIDispatcherFactory.CreateFromLocalPath() provides a public API for direct local-model loading used by the benchmark. LocalModelBenchmarkTests scans for ONNX GenAI models and runs them with a live HTML dashboard that polls every 5 seconds and opens automatically in the browser. SmrtDoodleIpcService adds named-pipe IPC for drawing integration. The entire solution migrates from .NET 10 preview to .NET 8 across all 7 projects, CI workflow, and deploy scripts; all Foundry/FoundryLocal references are renamed to OnnxRuntime/Gpu throughout the codebase.
Inklet v1.0.5 — App icon taskbar rendering and About dialog polish
v1.0.5 fixes the app icon missing from the taskbar (SetIcon was called with a .png file; a multi-resolution .ico with 256/48/32/16 px layers is now generated and set via SetWindowIcon) and corrects the About dialog copyright year from hardcoded 2025 to DateTime.Now.Year.
Inklet v1.0.4 — Store launch crash and window size on maximise fixed
v1.0.4 resolves an immediate launch crash from the Microsoft Store (REGDB_E_CLASSNOTREG) caused by the WAP project missing a PackageDependency for the Windows App SDK runtime — fixed by adding Microsoft.WindowsAppSDK with IncludeAssets=build. Also fixes window reopening at screen-overflow dimensions when it had been closed while maximised.
Inklet v1.0.2 — Session data file storage, binary file detection, and cursor persistence
v1.0.2 moves persisted tab JSON from ApplicationDataContainer (limited to 8 KB) to a file in LocalFolder so sessions of any size are preserved reliably. Binary file detection now shows a confirmation dialog when opening known binary formats or files containing NUL bytes. Unmodified file-backed tabs no longer store full content in the session file — they are reloaded from disk on next launch.
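The NUL-byte check mentioned above is the classic plain-text heuristic: genuine text files almost never contain a NUL, while most binary formats do within the first few kilobytes. A sketch under assumptions (the probe size is invented; Inklet's exact heuristics are not published, and it pairs this check with known-format detection since UTF-16 text also contains NULs):

```python
# Illustrative sketch of NUL-byte binary detection: scan a probe window at the
# start of the file for a NUL byte.
def looks_binary(data: bytes, probe_size: int = 8192) -> bool:
    return b"\x00" in data[:probe_size]

# PNG data contains NUL bytes in its chunk-length fields.
png_probe = b"\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR"
```

This is why an encoding check should run before the binary check: a UTF-16 file read naively as bytes is full of NULs and would otherwise trigger a false positive.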
Inklet launches on GitHub and submitted to the Microsoft Store
Inklet is now public on GitHub and has been submitted to the Microsoft Store for certification. The lightweight WinUI 3 Notepad clone ships with multi-tab editing, full session persistence, advanced encoding detection, and the complete classic Notepad feature set.
Inklet v1.0.0 — Print dialog crash fixes and Store certification submission
v1.0.0 resolves two AccessViolationException crashes in the print dialog: a null lpPageRanges pointer (Win32 contract requires a one-element buffer even with PD_NOPAGENUMS) and PrintDlgEx being invoked from an MTA ThreadPool thread (now runs on a dedicated STA thread via TaskCompletionSource). Default font size changed from 14 pt to 12 pt. Submitted to the Microsoft Store for certification.
Inklet v0.9.0 — Full session persistence with silent save-on-close
v0.9.0 introduces complete session persistence: all tab state (content, cursor, encoding, BOM, and line endings) is saved automatically on close with no save dialogs. Untitled tabs with unsaved content and saved files with in-progress edits are both fully restored on next launch. The new PersistedTabData record handles structured JSON serialisation of tab snapshots, replacing the previous file-path-only SessionFilePaths setting.
Inklet v0.8.0 — Multi-tab editing with session memory
v0.8.0 brings full multi-tab editing via WinUI 3 TabView: tabs are closable, reorderable, and draggable; Ctrl+T opens a new tab; the tab header shows a * prefix for unsaved changes; closing the last tab resets it rather than exiting. Basic session memory restores open file paths and the active tab index on next launch. File › Open and drag-and-drop reuse a clean untitled tab rather than always opening a new one.
Inklet v0.7.0 — Initial release: full Notepad feature set, encoding detection, and 58 unit tests
v0.7.0 delivers the complete Notepad experience: New/Open/Save/Print, Find & Replace (match-case, wrap-around), Go To Line, time/date stamp, word wrap, font picker, drag-and-drop, command-line file argument support, and zoom 25%–500%. Encoding detection covers UTF-8, UTF-16 LE/BE, ANSI, and international code pages with BOM and CRLF/LF/CR round-trip preservation shown in the status bar. 58 unit tests span four suites: EncodingDetectorTests, FileServiceTests, LineEndingTests, and DocumentStateTests.
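BOM-based detection for the encodings listed above can be sketched as follows; files without a BOM fall through to heuristics not shown here, and the codec labels returned are illustrative choices, not Inklet's internal names:

```python
# Illustrative sketch of BOM detection for UTF-8 and UTF-16 LE/BE.
import codecs
from typing import Optional

def detect_bom(data: bytes) -> Optional[str]:
    """Return a codec name for a recognised BOM, or None for BOM-less files."""
    if data.startswith(codecs.BOM_UTF8):        # EF BB BF
        return "utf-8-sig"
    if data.startswith(codecs.BOM_UTF16_LE):    # FF FE
        return "utf-16-le"
    if data.startswith(codecs.BOM_UTF16_BE):    # FE FF
        return "utf-16-be"
    return None

detected = detect_bom("héllo".encode("utf-8-sig"))
```

Round-trip preservation, as the status bar advertises, then just means remembering which BOM (if any) and which line-ending convention were seen at load time and re-emitting them on save.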
2026-04-07 · SmrtDoodle
In Development
SmrtDoodle v0.7.5 — UI test suite fully operational (409 passing)
The Appium/WinAppDriver UI test infrastructure has been rewritten for local execution. All suites are now green with 409 passing tests and 26 intentionally skipped for known WinUI 3 UIA limitations (controls without automation peers). The app also switches to unpackaged self-contained deployment for easier testing.
2026-04-07 · SmrtPad
In Development
SmrtPad v0.7.5 — Refined model catalog, full cross-platform benchmarks, and i18n fixes
v0.7.5 narrows the AI model catalog to ≤10B parameter models with GPU/CPU-only sentinels, introduces target-aware model menus that correctly filter aliases per execution path (NPU/GPU/CPU), expands benchmark coverage from ~1,095 to 1,971 per-run evaluations with JSONL response logging, and fixes 7 missing Smart Sidebar localisation keys across 8 satellite locales.
2026-04-06 · SmrtDoodle
In Development
SmrtDoodle v0.7.0 — AI tools, accessibility, 8-language localisation, and tiled rendering
A major feature push brings SmrtDoodle to near-Store-ready state: an AI Tools submenu (Remove Background, Upscale, Content-Aware Fill, Auto-Colorize, Style Transfer, Noise Reduction) gated behind a Pro licence; full accessibility with AccessKey bindings, UIA landmarks, and high contrast theme support; 8-language localisation including RTL Arabic; tiled 8K canvas rendering with DirtyRectTracker; and 538 passing unit tests across all subsystems.
2026-04-06 · SmrtPad
In Development
SmrtPad: AI skill prompts rewritten for consistent tag compliance and clean output
All 8 edit-skill prompts and the FreeformChat prompt have been rewritten to address tag compliance failures (previously 40%) and unwanted sign-off phrases identified in benchmark results. The FreeformChat prompt now correctly routes document requests through the <insert> tag path while keeping Q&A replies as plain text without tags.
2026-04-06 · SmrtPad
In Development
SmrtPad: Benchmarks add a GPU-only test and a per-case timeout
2026-04-04 · SmrtPad
In Development
SmrtPad v0.7.0 — Smart Sidebar full-response streaming
2026-04-03 · SmrtDoodle
In Development
SmrtDoodle launches on GitHub — WinUI 3 drawing app with SmrtPad integration
SmrtDoodle is now public on GitHub. The lightweight WinUI 3 and Win2D drawing application ships with a full tool suite, 8 brush styles, a shape library, layer management, and native IPC integration with SmrtPad for inserting drawings directly into documents.
2026-04-03 · SmrtDoodle
In Development
SmrtDoodle v0.1.0 — Complete drawing foundation with SmrtPad IPC
v0.1.0 delivers the full drawing foundation: pencil, brush, eraser, line, curve, shapes, flood fill, text, eyedropper, layer management, 50-step undo/redo, clipboard, file I/O (PNG/JPEG/BMP/GIF), canvas settings, grid/ruler, 30-colour palette, zoom, and SmrtPad IPC integration.
2026-04-02 · SmrtPad
In Development
SmrtPad: Foundry prerequisite check and Smart Sidebar automation status codes
SmrtPad now detects a missing Foundry Local installation before opening the AI sidebar, showing a guided setup dialog that distinguishes missing-install from dispatcher-init-failure. The Smart Sidebar also exposes stable automation status codes (PENDING, READY, PREREQ_FOUNDRY_MISSING) for reliable UI test assertions.
2026-04-02 · SmrtPad
In Development
SmrtPad: Remote AI model benchmark suite and Foundry deployment automation
A remote AI model benchmark test suite lands in SmrtPad, automating performance validation across NPU/GPU/CPU hardware targets on a dedicated Windows machine via WinRM. The Foundry deployment script now handles prerequisite installation via winget, direct download, and offline installer paths.
2026-04-01 · SmrtPad
In Development
SmrtPad: Use clipboard paste instead of SendKeys for reliable long-text injection into RichEditBox
2026-04-01 · SmrtPad
In Development
SmrtPad: Use MaxContextTokens for forced-alias model to prevent prompt truncation
2026-03-30 · StoryArc
In Development
StoryArc v0.2.0 — Chapter controls, library management, and audio engine improvements
v0.2.0 introduces chapter selection and speed controls to the Now Playing page, a comprehensive right-click context menu for library management, a custom branded title bar, and significant fixes to M4B chapter parsing, multi-file audiobook handling, and Whisper transcription reliability.
JAD Apps studio site updated
The studio portfolio has been refreshed with the latest project data, changelog highlights, and direct Microsoft Store links.
2026-03-29 · SmrtPad
In Development
SmrtPad advances AI-integrated writing assistance
Active development is focused on on-device model initialisation, execution target selection (NPU/GPU/CPU), and expanded coverage across the editor's AI-driven features.
2026-03-29 · StoryArc
In Development
StoryArc architecture established for phased delivery
A structured solution layout and dependency baseline are in place to support accelerated development across chapter navigation, DSP audio, caption generation, and the core listening experience.
2026-03-27 · SmrtPad
In Development
SmrtPad v0.6.0 — Live AI initialisation progress
2026-03-27 · SmrtPad
In Development
SmrtPad: Dropdown mojibake fix, insert-tag extraction, and stop-before-model-switch
2026-03-24 · MarkUp Markdown Editor
Released
MarkUp Markdown Editor v1.6.0 — Full bidirectional editing with toolbar formatting from the preview pane
v1.6.0 delivers complete toolbar formatting support from the preview pane, backed by a new MarkdownSelectionProjection engine that precisely maps visible offsets to source offsets, making in-preview editing feel seamless.
2025-07-14 · StoryArc
In Development
StoryArc v0.2.0 — Chapter selector ComboBox pill on the Now Playing page
A chapter selector ComboBox pill sits to the right of the transport controls on the Now Playing page; it shows the current chapter and allows direct seeking.