1 of 37

A small tool for sketching UIs

2 of 37

A small tool for sketching UIs

Have an editor to quickly create simple UI sketches, usable e.g. for discussing what an interface should look like.

Jan writes:

how about we try this:

[Tool created sketch of UI]

Anna writes:

I think a vertical layout is better, like so:

[Tool created sketch of UI]

add comment

3 of 37

A small tool for sketching UIs

Rough sketch of what it might look like, showing the three tools/modes.

Select

New UI element

New annotation

4 of 37

A small tool for sketching UIs

It should be simple and compact, so it can be embedded in other tools (e.g. Fossil, Gitea, Mattermost etc.)

Add a UI sketch!

5 of 37

Metaphor

  • If figma is for creating a blueprint of your UI, using pens, stencils and squared paper
  • …then this is drawing with a sharpie on a sticky note

  • Collaborating in figma is like refining the blueprint…
  • …collaborating here means responding with a sticky-note sketch of your own, building upon the other’s work.

6 of 37

RELEVANT FEATURES

7 of 37

Autogrouping: No grouping or stack management

  • When you drop a UI element on another UI element they become grouped; inner elements are always on top of outer ones; if you drag the outer element, all inner elements are dragged along.
  • This means: No stack management with „send back“, „send forward“ or „group“.

This actually works and has been done before:
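A hypothetical sketch of the drop logic described above (all names – `Figure`, `encloses`, `findDropTarget` – are my assumptions, not the actual implementation): on drop, find the innermost figure that fully contains the dropped rectangle and reparent the dropped figure into it. Dragging a parent then automatically moves its children, since they are stored in its subtree.

```javascript
// Auto-grouping on drop: reparent a figure into the innermost
// figure whose rectangle fully encloses it.
class Figure {
  constructor(x, y, width, height) {
    Object.assign(this, { x, y, width, height });
    this.children = [];
    this.parent = null;
  }
  // true if this figure's rectangle fully contains the other's
  encloses(other) {
    return other.x >= this.x &&
           other.y >= this.y &&
           other.x + other.width <= this.x + this.width &&
           other.y + other.height <= this.y + this.height;
  }
  // walk the tree to find the innermost figure enclosing `dropped`
  findDropTarget(dropped) {
    if (!this.encloses(dropped)) return null;
    for (const child of this.children) {
      if (child === dropped) continue; // never drop a figure into itself
      const inner = child.findDropTarget(dropped);
      if (inner) return inner;
    }
    return this;
  }
  // attach `child`, detaching it from its previous parent first
  append(child) {
    if (child.parent) {
      child.parent.children = child.parent.children.filter(c => c !== child);
    }
    child.parent = this;
    this.children.push(child);
  }
}

const canvas = new Figure(0, 0, 1000, 1000);
const panel = new Figure(100, 100, 400, 300);
canvas.append(panel);

const button = new Figure(120, 120, 80, 30);
canvas.append(button);
// on drop: reparent to the innermost enclosing figure
canvas.findDropTarget(button).append(button);
console.log(button.parent === panel); // true
```

Because grouping falls out of geometry alone, no explicit „group“ command or stack management is needed, matching the bullet points above.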

8 of 37

Zoom to cursor, panning etc.

  • Basic interactions like zooming and panning should work smoothly – zooming needs to move the canvas in a way that the same canvas point stays under the cursor. Some apps don’t do that, which feels confusing.
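The zoom-to-cursor behavior described above boils down to a small coordinate calculation. This is a hypothetical sketch (the `view` object and function name are assumptions, not the tool’s actual code): convert the cursor to canvas space, change the scale, then re-derive the pan so the same canvas point maps back to the cursor.

```javascript
// Keep the canvas point under the cursor fixed while changing scale.
// View state: pan (screen offset of the canvas origin) and scale.
function zoomAtCursor(view, cursorX, cursorY, zoomFactor) {
  // canvas-space point currently under the cursor
  const canvasX = (cursorX - view.panX) / view.scale;
  const canvasY = (cursorY - view.panY) / view.scale;

  view.scale *= zoomFactor;

  // re-derive the pan so the same canvas point maps back to the cursor
  view.panX = cursorX - canvasX * view.scale;
  view.panY = cursorY - canvasY * view.scale;
  return view;
}

const view = { panX: 0, panY: 0, scale: 1 };
zoomAtCursor(view, 100, 50, 2);
// the canvas point that was at screen (100, 50) still projects there:
console.log(100 * ((100 - 0) / 1) / 100 * view.scale + view.panX); // 100
```

Apps that skip the re-derivation step zoom around the canvas origin instead, which is what produces the confusing “canvas jumps away” feeling.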

9 of 37

Contextual UI

  • The UI should show contextually relevant actions at the elements that can be edited.
  • Usability tests will be needed to find the most important features and commands.
  • The UI should be visible; it should not fake minimalism by hiding functionality

Prior art: a lot, but closest to this idea is Whimsical, which uses such a contextual UI.

10 of 37

COLLABORATION MODEL

11 of 37

Collaboration model

  • The collaboration model assumes online collaboration in a thread- or stream-like structure, e.g. in issue trackers, discourse discussions, chats…
  • UI Sketches could be
    • created from scratch
    • copied from a previous post (“How about changing…”)

12 of 37

Collaboration model

  • It is easy to build upon one’s own or other users’ sketches: just create a copy and make your changes
  • Building upon each other’s work is thus like responding in a conversation: you said… and I would add…

build upon this sketch

13 of 37

Collaboration model

Using a copy-and-build-upon model of collaboration, we gain the following advantages:

  • This way, no hard-to-implement conflict resolution is needed (e.g. Conflict-free Replicated Data Types, Operational Transforms or, less complex, state-machine replication).
  • History is easy to understand (the things you copied from are still there)
  • Combination with text (e.g. in comments or issues) is easy as sketches mostly behave like images or text once posted (editable by the author if needed, but mostly static)

14 of 37

ALTERNATIVES

15 of 37

Whimsical

  • Contextual UI
  • Autogrouping
  • Not open
  • Many different use cases
  • Live-collaboration tool
  • Not sketch-like looking

16 of 37

Balsamiq

  • Looks like a sketch (yay!)
  • Focussed on UIs
  • Similar target group – people collaborating around designs
  • Some functionality is clunky (no zoom to cursor)
  • No auto-grouping
  • Still too many features
  • not open

17 of 37

Draw.io

  • Open
  • Established codebase
  • embedded in some products via plugins already
  • Universal diagramming tool, no focus on UI
  • No auto-grouping

18 of 37

Excalidraw

  • General whiteboarding
  • No auto-grouping
  • I still need to figure out how their code works (some combination of react+canvas; node 18-20, yarn required to run)
  • Looks like a sketch
  • Open

19 of 37

IMPLEMENTATION: DETAILS FOR DEVS

More backend-y stuff I tried or thought about. It is written in the context of drawing apps, so just as understanding a web app usually needs knowledge of concepts like routers and requests, this might be hard to grasp without knowledge of drawing-app concepts.

20 of 37

Repository

It only does part of this yet, but if you want to try it:

https://github.com/jdittrich/quickwire

Expect it to barely look better than this!

21 of 37

OOP

  • Most tools I looked at seem to avoid an MVC-like approach; instead they have figure objects with (sometimes rather elaborate) inheritance.
  • At the moment I wrap Backbone.js in my own classes (so the outside does not use Backbone’s API) and use its models and view rendering, but not the view-controller part (I rather catch events and do my own hit testing, to be able to deal with zoom and drag interactions)

22 of 37

Hotdraw, morphic and OOP

  • OOP seems to work well for this kind of application

Classic examples:

  • hotdraw
    • A diagramming framework, originally implemented in smalltalk
    • There are some papers and discussions of hotdraw’s architecture, making it relatively easy to understand
  • morphic
    • more focussed on the creation of UIs, also originally created in smalltalk
    • Some publications, but fewer than for hotdraw, afaik

23 of 37

Some approaches not used so far

  • Event-stream functional
    • I found it hard to wrap my head around and I did not find a lot of prior art.
  • vue or react-based
    • I know that some projects do this (e.g. excalidraw is based on react), so it can work
    • I had the impression that what these frameworks were good at and what interactive drawings required was not aligned
  • d3
    • it seems to be more aligned than vue/react
    • it’s still not a data visualization, and I found no prior art
    • Zoom and drag/drop seems to work well, though

24 of 37

Canvas and figures

  • The sketch is displayed on a canvas, which can hold content: Figures
  • The canvas can be zoomed and panned
  • Figures have…
    • a position, width, height
    • A parent (another figure or the canvas)
    • Attributes like a text, or whether it should look active or passive, etc. These depend on the “subtype” of figure (Button, Textfield, Box, etc.); this can be implemented as an OOP subtype, but there are also other ways
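As one of the options mentioned above, the subtype-specific attributes could be expressed with OOP subtyping. This is a hypothetical sketch (class and attribute names are assumptions), showing a base figure holding geometry and parent, with subtypes adding their own attributes:

```javascript
// Base figure: geometry plus a parent (another figure or the canvas).
class Figure {
  constructor({ x = 0, y = 0, width = 100, height = 40 } = {}) {
    Object.assign(this, { x, y, width, height });
    this.parent = null;
  }
}

// Subtypes add attributes specific to their kind of UI element.
class ButtonFigure extends Figure {
  constructor(options = {}) {
    super(options);
    this.label = options.label ?? "Button";
    this.active = options.active ?? true; // look active or passive
  }
}

class TextfieldFigure extends Figure {
  constructor(options = {}) {
    super(options);
    this.placeholder = options.placeholder ?? "";
  }
}

const ok = new ButtonFigure({ x: 10, y: 10, label: "OK" });
console.log(ok.label, ok.active); // "OK" true
```

Alternatives to subtyping (also hinted at on the slide) would be a single figure class with a `type` tag plus an attribute bag, which keeps serialization simpler.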

25 of 37

Layers of rendered elements

(Entirely internal, this is nothing the user deals with)

  • typical UI elements like a „duplicate this element“-button or „select size“
  • handles for scaling, changing size etc.: little boxes to drag/drop
  • element previews, snaplines
  • the document itself

26 of 37

Notes on language and tools

  • So far I used a toolchain-free setup:
    • Javascript with ES6 Modules
    • Unit tests via ES6 module imports and qunit-in-the-browser

  • Advantages are:
    • Easy to pick up development, only a simple HTTP server is needed (e.g. the one that ships with Python)
    • Easy to debug in the browser, there are no sourcemaps or long call-trees.

27 of 37

Selection

  • The selection can be empty or have one or more figures
  • Most commands affect every object in the selection
  • Thus, instead of saying „use this command on these objects“ we will usually say „use this command on the selection“
  • Not sure whether selecting only one element is sufficient; autogrouping might do away with much of the need to select several elements manually.
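The „use this command on the selection“ phrasing above suggests a small selection object that commands operate on. A minimal hypothetical sketch (names assumed):

```javascript
// A selection holds zero or more figures; commands act on all of them.
class Selection {
  constructor() { this.figures = new Set(); }
  add(figure)     { this.figures.add(figure); }
  clear()         { this.figures.clear(); }
  isEmpty()       { return this.figures.size === 0; }
  // apply an action to every figure in the selection
  forEach(action) { this.figures.forEach(action); }
}

const selection = new Selection();
const figA = { x: 0, y: 0 };
const figB = { x: 50, y: 0 };
selection.add(figA);
selection.add(figB);

// e.g. a "move right by 10" command applied to the selection:
selection.forEach(fig => { fig.x += 10; });
console.log(figA.x, figB.x); // 10 60
```

If single-element selection turns out to be sufficient, this collapses to a nullable reference, but the command code stays the same either way.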

28 of 37

Tools

  • Tools determine what a certain input does.
  • …however, they are not the same as selectable tools in a sidebar
  • e.g. Pressing the mouse button on a handle of a selected figure activates the „resize“ tool; then, moving and releasing the mouse would commit the „resize“-command.
  • Tools have methods like „onmousedown“, „onmousemove“, „onkeypress“ etc.
  • The methods are called by the toolManager, which knows the active tool and can transition between tools
  • OOP-pattern: State
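The State pattern mentioned above can be sketched like this (hypothetical names; `ToolManager`, `SelectTool` etc. are assumptions): the manager forwards input events to the active tool, and tools trigger their own transitions, e.g. a press on a handle switching to a resize tool.

```javascript
// State pattern: each tool is a state; the manager delegates input
// events to the active tool, and tools can transition to other tools.
class Tool {
  onMouseDown(event, manager) {}
  onMouseMove(event, manager) {}
  onMouseUp(event, manager)   {}
}

class SelectTool extends Tool {
  onMouseDown(event, manager) {
    // pressing on a handle of a selected figure activates resizing
    if (event.hitsHandle) manager.setTool(new ResizeTool());
  }
}

class ResizeTool extends Tool {
  onMouseUp(event, manager) {
    // releasing the mouse would commit the resize command,
    // then return to the default select tool
    manager.setTool(new SelectTool());
  }
}

class ToolManager {
  constructor() { this.activeTool = new SelectTool(); }
  setTool(tool) { this.activeTool = tool; }
  onMouseDown(event) { this.activeTool.onMouseDown(event, this); }
  onMouseMove(event) { this.activeTool.onMouseMove(event, this); }
  onMouseUp(event)   { this.activeTool.onMouseUp(event, this); }
}

const manager = new ToolManager();
manager.onMouseDown({ hitsHandle: true });
console.log(manager.activeTool instanceof ResizeTool); // true
manager.onMouseUp({});
console.log(manager.activeTool instanceof SelectTool); // true
```

This is why tools need not match the sidebar: transitions like select→resize happen implicitly from input, without the user picking a different tool.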

29 of 37

Commands

  • Commands actually change the document
  • Commands are usually issued by tools
  • Commands are undoable
  • Like with the tools, there is a commandManager with methods like do(new CommandType), undo() and redo()
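The command list above maps onto the classic two-stack undo/redo sketch. Hypothetical names throughout (`MoveCommand` is an assumed example command; only `do`/`undo`/`redo` come from the slide):

```javascript
// Each command knows how to apply and revert itself.
class MoveCommand {
  constructor(figure, dx, dy) { Object.assign(this, { figure, dx, dy }); }
  do()   { this.figure.x += this.dx; this.figure.y += this.dy; }
  undo() { this.figure.x -= this.dx; this.figure.y -= this.dy; }
}

// The commandManager keeps undo/redo stacks of executed commands.
class CommandManager {
  constructor() { this.undoStack = []; this.redoStack = []; }
  do(command) {
    command.do();
    this.undoStack.push(command);
    this.redoStack = []; // a fresh command invalidates the redo history
  }
  undo() {
    const command = this.undoStack.pop();
    if (command) { command.undo(); this.redoStack.push(command); }
  }
  redo() {
    const command = this.redoStack.pop();
    if (command) { command.do(); this.undoStack.push(command); }
  }
}

const figure = { x: 0, y: 0 };
const commands = new CommandManager();
commands.do(new MoveCommand(figure, 10, 5));
commands.undo();
console.log(figure.x, figure.y); // 0 0
commands.redo();
console.log(figure.x, figure.y); // 10 5
```

Since tools only construct commands and hand them to the manager, undoability comes for free for every tool.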

30 of 37

Open questions: Rendering

  • It seems easiest to just render to HTML/CSS: all elements are boxes anyway.
  • Some tools render to SVG (Penpot, Whimsical), others to canvas (hotdraw, morphic)
  • For ease of sharing/displaying-as-image, rendering to a canvas/a png would probably make sense.
  • Details:
    • On canvas, the sketch-like look probably needs to use rough.js
    • I don’t know how to handle text on canvas
    • most implementations on canvas use a quad-tree to optimize rendering via „dirty“ flags and redrawing parts.

31 of 37

Open questions: Rendering 2

  • A quirk of rendering to HTML that I noticed: whenever the stack order is changed (an internal “to front” or “to back”, not called by the user), I also need to reflect this in the HTML by calling something like a private toFrontHtmlRenderer(). This could later be solved by the view listening to events and then calling its own needed methods. Rendering on canvas would just redraw the “dirty” areas with the correct stacking (I guess there the complexity lies in deciding when areas are marked “dirty”)

32 of 37

Open questions: Overlays, previews

  • A drag or resize is „done“ when releasing the mouse button, but needs to show feedback while dragging. So:
    • should I actually change the figure while dragging and, if cancelled, reset it to a stashed initial state, OR
    • should only a preview overlay be dragged, with the figures changed when the command is committed, OR
    • use a Model/View-Model split? (change only the view-model, then commit to the model)

33 of 37

Open questions: Overlays, previews

…via a transient mode: during a drag, changes are transient:

onDragStart(){
    state.transientMode(true)
    …
}

onDragMove(){
    selection.move(…)
}

onDragEnd(){
    state.transientMode(false)
    commandManager.apply(…)
}

…probably via the Memento pattern on the complete state?

34 of 37

Open questions: Overlays, previews

Strategy: dragging a copy, updating the model on drop.

On drag start, make the original view invisible and create a copy that is rendered on top. Drag this copy. When dragging is done, destroy the copy and change the position of the original. (This works pretty well in MVC; I had problems with it in declarative-reactive paradigms.)
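The drag-a-copy strategy can be sketched in a few lines (hypothetical helper names; the view layer is stubbed, the real one would hide and render DOM or canvas elements):

```javascript
// Drag a ghost copy; the model is only touched once on drop.
function startDrag(figure, view) {
  view.hide(figure);            // original disappears during the drag
  const ghost = { ...figure };  // shallow copy, rendered on top
  view.renderOnTop(ghost);
  return ghost;
}

function endDrag(figure, ghost, view) {
  view.remove(ghost);           // destroy the copy …
  figure.x = ghost.x;           // … and commit its final position
  figure.y = ghost.y;
  view.show(figure);
}

// stubbed view, just to make the sketch runnable
const view = { hide() {}, show() {}, renderOnTop() {}, remove() {} };

const figure = { x: 0, y: 0 };
const ghost = startDrag(figure, view);
ghost.x = 40; ghost.y = 20;     // the user drags the ghost around
endDrag(figure, ghost, view);
console.log(figure.x, figure.y); // 40 20
```

Cancelling a drag is trivial here: just destroy the ghost and show the original again, since the model was never modified.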

35 of 37

Open questions: Abstracting auto grouping

Most models of graphics work based on positions, manual grouping and stack management. Here, we would only have the user-issued position. Should figuring out the grouping happen:

  • before I construct commands (so the commands themselves operate on the stack-grouping model; more deterministic), or
  • after I issue the commands (so the commands just contain the positions and the grouping is part of the drawing-object behavior; better reflecting the user’s action)?

…I tend to choose the latter (but I will probably need escape hatches then)

36 of 37

Open questions: Storage format

  • Due to the nesting, XML might actually make sense…
  • …but JSON is fine, too.

  • It’s good if it is human-readable; however, it is not primarily a plain-text format or even a UI specification language

37 of 37

Theory: Why such a tool?