This is a comprehensive guide to the big idea: a central design system for serving teams.
It exists to unify teams, guide developers, tutor users, support pitch meetings, and focus marketing.
We should design before we code, but to effectively reach the ideal we need to get this into the world and expose it to more use cases and ideas, completing the dream team that will complete the ideal interface. That means not solving every ideal, but solving every foundational ideal.
Let’s review system designs that can make every Space Principle possible.
The universe is always changing, and your Space Reactor's job is to update the universe with the Mods you spawn. If you spawn a ball into the universe, even if you didn't originally create it, your reactor is running that ball. The universe is therefore highly parallel. If I subscribed for 10 trillion flops, I could fill the universe with 16 billion reactions per frame, simultaneously. Everyone is running their own set of processors. That's a lot of changes at the same time. Add up all the operators, including you, and we have a living, scalable universe.
You can set up a space-altering field that modifies the default Space Engine reactions. In effect, you are changing the constants of the universe. This will appear as an anomaly, so as you cross into it, things will act differently.
As you cross the horizon you could be offered a Lens to update your biases to cooperate with this space. You can change lenses at any time, but this fact will be logged so the rules you play by are apparent. You can use lenses to shift out of phase, in a sense, and try something different. For example, you can remove all man-made objects from Earth and build upon a natural ground. People could switch and combine lenses to replay and revise the world. This also allows you to delete a protected something you don't want from someone else's planet without affecting them, because you do so in your own lens (perspective), which you can share with your friends.
What space looks like is an agreement; if you disagree with someone, then by choice you will see space differently, and that's totally cool. It will be like living in an alternate dimension, or a mix between many.
By preparing models in its free time, your Digestor works to balance the efficiency and accuracy of shapes, materials, lists, and mods.
Items are continually loaded from the list database, processed, and optimized.
A list of processes is performed on each object to generate information about it.
The item is loaded and rendered at full quality in a variety of lighting conditions, angles, and ranges of animation motion.
These renders are sent to an AI labeler, which identifies what the item is in the form of tags and optimization categories.
These tags can be filtered by the user.
Categories are checked so other processes can select the best method for the item.
For example: if it is a grass blade, then a Noise LOD method of optimization would be used on this object.
These renders can be used for card type LODs.
A long list of tests and optimizations will be performed on each item.
This library of methods is vast, and below is only a sample of the options.
Deciding which ones are routine for a given tag or category could be procedural, based on performance reports, though it will be hand-tuned at first.
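The routing step above can be sketched as a small dispatch table. Everything here is illustrative: the category names, method names, and the `ROUTINES` table are assumptions standing in for the hand-tuned (later performance-driven) rules the document describes.

```python
# Hand-tuned routing table: category -> ordered list of methods to run.
# A performance-report-driven version could rewrite this table over time.
ROUTINES = {
    "foliage": ["noise_lod", "card_lod"],
    "rigid":   ["shape_sim", "contact_sim", "destruction_sim"],
    "fluid":   ["chemistry_sim"],
}

DEFAULT_ROUTINE = ["shape_sim"]

def select_methods(categories):
    """Return the deduplicated, ordered methods for an item's categories."""
    methods = []
    for category in categories:
        for method in ROUTINES.get(category, DEFAULT_ROUTINE):
            if method not in methods:
                methods.append(method)
    return methods
```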
Shape Sim
The object's shape and materials are reauthored at several levels of expense.
The new shape is rerendered and ranked by how closely it visually matches the Labelizer's renders.
The material is reauthored at the same time, as it is related to the result quality. Various methods of compression, stylizing, and normal maps can be prepared.
The Server Instance will call the specific version depending on what it can afford.
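The rank-and-afford step above can be sketched as follows, assuming renders are flat lists of grey pixel values and each variant carries a cost. The function names and error metric (mean absolute difference) are illustrative; a real Digestor would compare full renders across lighting and angles.

```python
def render_error(reference, candidate):
    """Mean absolute per-pixel difference between two renders."""
    return sum(abs(a - b) for a, b in zip(reference, candidate)) / len(reference)

def cheapest_within_budget(reference, variants, budget):
    """variants: list of (cost, render) pairs. Return the lowest-cost
    variant whose render error stays under budget, or the most accurate
    variant if none is affordable enough."""
    affordable = [(cost, r) for cost, r in variants
                  if render_error(reference, r) <= budget]
    if affordable:
        return min(affordable, key=lambda v: v[0])
    return min(variants, key=lambda v: render_error(reference, v[1]))
```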
Contact Sim
There is also a list of several tests, such as launching the object at various velocities at another object.
Collision Mesh LODs: Designing a variety of collision meshes that are tuned for various detectable situations and resource requirements.
Destruction Sim
Simulating destruction to collect data on the results of the object’s specific shape and materials.
What does a Scrape look like at various depths?
Design scrape masking materials for emulating the result with less resources.
What does a Break look like when impacting at various angles?
Prebake variety of vertex animations based on the simulation for low resources.
What does a Deformation look like?
Prebake variety of vertex animations based on the simulation for low resources.
Chemistry Sim
Methods could be used to prepare for the most expensive-to-load-and-render results of combined states, such as animating paper on fire.
The Server Instance decides which LOD method to use/call out of the list database.
The Digestor's simulator can be called directly when a massive simulation is predicted to be needed in advance; this would bake that vertex simulation to be played if/when the event, or one similar enough, occurs.
Otherwise it may either need a more approximate simulation, or it would need to slow down local space as an anomaly to simulate smoothly with the given resources.
This is embedded in the server instance, and focuses on the emergency solutions that maintain the infinite nature of items and resources. For instance, say you filled the visible universe with unique shapes. At some point the resources would cross an emergency level that demands a solution, and the solution is chosen appropriately for the context. In this case it may make a procedural noise texture that is similar to the distribution of shapes, then swap out the shapes for a plane with that texture, approximating the same experience until a longer-term solution can be managed.
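A minimal sketch of that emergency check, assuming the only fallback is the shape-cluster-to-textured-plane swap described above. The budget value and all names are made-up illustrations.

```python
EMERGENCY_BUDGET = 1000  # maximum affordable shape count (illustrative)

def enforce_budget(shapes):
    """Return (kept_shapes, proxies). Excess shapes collapse into one
    proxy plane carrying a procedural texture that approximates them."""
    if len(shapes) <= EMERGENCY_BUDGET:
        return shapes, []
    kept = shapes[:EMERGENCY_BUDGET]
    excess = shapes[EMERGENCY_BUDGET:]
    proxy = {"kind": "textured_plane", "approximates": len(excess)}
    return kept, [proxy]
```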
Item changes are logged to recover old states and deduce patterns from for use in prediction methods.
Item histories are synchronized between servers that are closer to the user for faster streaming, usually based on strict timestamps, so later changes overwrite older ones.
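The strict-timestamp rule above amounts to a last-write-wins merge. A minimal sketch, assuming change logs are (timestamp, item_id, state) tuples:

```python
def merge_histories(*histories):
    """Merge change logs from several servers into one item state map,
    letting the latest timestamp win for each item."""
    latest = {}  # item_id -> (timestamp, state)
    for history in histories:
        for timestamp, item_id, state in history:
            if item_id not in latest or timestamp > latest[item_id][0]:
                latest[item_id] = (timestamp, state)
    return {item_id: state for item_id, (ts, state) in latest.items()}
```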
It prepares imposters to later replace the highest detail shapes when in low priority.
It regularly updates the skybox with universe changes.
It bakes old light or predictable animation.
It indexes lists for faster filtering.
It categorizes mod conditions and actions to optimize and batch scanning and executing.
The observer’s perspective is simulated to test the procedures to discover the best result.
It breaks up objects for granular LODing across large-scaled objects (like landscapes or Ant-Man-style scaling).
It creates transparent texture representations of small-scale objects (like sparks or distant people).
It creates procedural texture representations of existing textures (like up-close dirt).
It can also compress models into vector displacement maps.
It prepares voxel and point cloud versions of shapes for volume calculations.
It prepares various maps of shapes for light & edge... references.
Your Renderer works to produce images of space.
Light, Lenses, Filters and other Mods define how the image is drawn.
Polygons
Point Clouds
Voxels
NURBS
Parametrics
Strokes
Generators
In what form is model data stored and drawn?
This matters and it doesn't, because the real goals are:
To store the intention of the operator accurately.
To render priorities quickly.
Each frame, each shape in view is given a priority value to determine how it will be drawn and how often it will be visually updated.
This is what makes the Space Engine limitless: the renderer will dump unimportant updates in order to maximize the detail of the shape in your hands.
Distance from the camera is one metric for scoring priority, as the further away a shape is, the less detail crosses a pixel. Plus, it may not be necessary to update further shapes as often, similar to how light takes time to reach us from distant stars; looking up into space, distant objects may be delayed.
If the operator is looking at a shape, its priority is increased.
If a shape's priority changes, whether because you get closer, focus on it, or wait, the method by which it is rendered may change. The goal is for this change to go unnoticed, so we use various methods:
Polygon Subdivision.
Gradual Voxel or Point Cloud Density.
Many Lossless Model Swaps.
Gradual Screen Space Transitions.
Humans have a threshold for change detection, especially when they are not focusing on the shape. So when their eyes are not pointed at the shape, you can change it if you go slowly enough to pass under their detection threshold. For shapes they are looking at, you must rely on the Digestor's skill to make the imposter lossless, meaning the low-detail versions are indistinguishable from the high-detail shape at a given distance.
Foveated rendering detects where you are looking in order to focus its effort, but perhaps the opposite should be done, to keep operators from noticing changes as higher quality is swapped out. The peripheral vision is more sensitive to motion, so a combination of the threshold and foveation could produce seamless results.
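The priority and threshold ideas above can be sketched together. The weights, the detection threshold value, and the function names are assumptions, not engine constants:

```python
GAZE_BONUS = 10.0
DETECTION_THRESHOLD = 0.05  # max unnoticed LOD change per frame (assumed)

def priority(distance, is_gazed_at):
    """Higher score = drawn in more detail and updated more often."""
    score = 1.0 / max(distance, 1.0)  # farther -> fewer pixels -> lower
    if is_gazed_at:
        score += GAZE_BONUS
    return score

def step_lod(current, target, is_gazed_at):
    """Move a shape's LOD toward its target; unfocused shapes move only
    in steps small enough to pass under the detection threshold."""
    if is_gazed_at:
        return target  # relies on the imposter being lossless
    delta = target - current
    step = max(-DETECTION_THRESHOLD, min(DETECTION_THRESHOLD, delta))
    return current + step
```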
If an operator models a triangle, it will be stored as a polygon, because that is the most efficient storage of a triangle. If they curve one of the edges, it will be stored as a NURB, because that is the most accurate way to store the intention. If they want to render it simply, we will render it as polygons. If we want to render it faster, we will replace it with an imposter. If we want soft, yummy lighting and accurate volume, or more gradual decimation, we will render it with sphere-casted voxels. If we want even more accuracy, we can use point clouds. If we want smoother animation for known journeys, we can bake it as a vector. But we should juggle the pros and cons of each approach to maintain the two main goals.
There are so many ways to draw space. Explore a huge collection of concepts in the Case Studies page, specifically Space Cases.
The renderer can predict various future needs and pass off such work to the Digestor.
The renderer can use machine learning, denoising, strokes and other techniques to fill in the screenspace before an accurate update is available.
If an object exceeds a specific size relative to a chunk of space, it will be included in a list (group) of histories that can be requested by neighboring chunks for updating their skyboxes. These can be animated, rendered, and added to the queued skybox animation.
Using the causality speed limit of light, any changes will be updated on neighboring skyboxes periodically rather than in real time.
Skybox chunks can be nested and masked, with closer objects getting faster updates.
Though these will be optimized and LODed into fewer layers as resources require.
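The light-delay rule can be sketched as a visibility test, assuming one made-up unit of "chunk distance per tick" for the causality speed:

```python
LIGHT_SPEED = 1.0  # chunk-distance units per tick (illustrative)

def skybox_update_due(change_tick, distance, now):
    """Has a change become visible to a chunk `distance` away by `now`?
    Closer chunks see it sooner; distant chunks see it later."""
    return now >= change_tick + distance / LIGHT_SPEED
```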
This is the sauce that makes the impossible possible. Because all shape and other data is local, it takes little time to prepare a united space from many local client perspectives. Then we merely stream the result to the operator on any device. Server power is scalable without the client needing to upgrade past finite specs.
Images need to be rendered many times a second. Demand can be as low as a watch screen, or as high as being drawn in parallax twice at retina resolution, with a 180-degree FOV, compressed with enough time to send over the internet, and displayed at an imperceptible frame rate.
This aims to meet the demand of VR streaming.
This highest FOV/resolution demand could be smaller if eye-tracked VR is available.
It may also be possible to stream polygons/points to be rendered locally where possible, and composited with streamed images meant to texture cards rendered remotely.
Processors: 10 trillion FLOPS
Internet: 1,280 Gbps
Internet: 35-70 Mbps, 8-16 TB per month
Storage: 1 GB for client software
RAM: 8-16 GB
Processor: 2.5 GHz 2-core for decoding
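As a back-of-envelope check on these numbers, here is the raw pixel rate for stereo "retina"-class streaming and the compression ratio needed to fit the 70 Mbps client link. The per-eye resolution and frame rate are assumptions, not figures from this document:

```python
EYES = 2
WIDTH = HEIGHT = 4096       # per-eye resolution (assumed)
FPS = 90                    # frame rate (assumed)
BITS_PER_PIXEL = 24

# Uncompressed bits per second for both eyes.
raw_bps = EYES * WIDTH * HEIGHT * FPS * BITS_PER_PIXEL

client_bps = 70e6  # 70 Mbps from the spec list above
ratio = raw_bps / client_bps

print(f"raw: {raw_bps / 1e9:.1f} Gbps, needs ~{ratio:.0f}:1 compression")
```

So the raw stream is tens of gigabits per second, and fitting it into the listed client link implies roughly a thousand-to-one compression, which is why predictive and screen-space fill-in techniques matter.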
What you are about to see is the minimum viable interface with maximum use cases, based on the Interface Principles.
Statistics have been used to claim that people don't change the default interface on their computers, and then that is used to argue that people don't want customization, so it is taken away. They also say things like, "people are so simple that you need to make the interface simple; they're not smart enough to have control." I would say this is wrong. What we should do is make sure the interface they start with is really well principled, while giving them control in a kind manner, so that they are willing to use the controls.
A shape that contains concepts, to intuitively communicate distinction and association.
Instead of containing like a box, it contains like stacks on a table.
In programming, math, and other languages, we use special characters to separate and group ideas. We use < > [ ] { } ( ) “ ” - : | , . / \ as well as spaces, paragraphs, returns, typefaces, and indentions. Each of these is used in inconsistent cases. If you mix concepts with characters, as in 3<45>6, it is unclear what is an operation, what is a conclusion, and what is a container.
With cards we can associate without the limitations of symbols.
Toolbars, panels, menus, fields, windows, buttons, pointers, and tabs are all examples of cards.
Shapes, including other cards, sit inside the card and are pulled through the surface to form an impression of what's inside.
Shades & colors let you distinguish overlapping cards, but you very quickly run out of distinct contrasts and lose the ability to communicate distinction.
Borders allow you to have unlimited nested cards, but remember too much contrast competes with the contents.
Shadows, finally, can be infinitely nested and variably contrasted, plus colors remain available to the operator to use for highlighting filters.
What if the interface had an actual orthographic light that orbits it in time with the sun, and when the light is behind, the icons light up like a city would at night?
As the sun goes down, the bloom contrast effect decreases, showing a greater shadow depth.
As the sun eclipses the horizon, the bright halo causes the surface to appear darker as our irises contract.
When the sunlight is no longer seen, the emitted light makes everything visible as our pupils dilate.
By pressing a card into another, you can indicate that a card is active or a slot is available for input or a socket is available for connection.
Here are some examples of how indention has been used. Text fields, Pressed Buttons, Slot for Scrollbars, Slot for Mod Cards, Slot for Mod Wires, Etched Divider.
What data fields are editable with your tool's current scope?
Using inset cards can indicate this.
The toolbar, when scoped, would for example expand to allow for a slotted inset to indicate you can move, add, or remove tools in this slot.
The corner radius of stacks must equal the radius of the level beneath it minus the distance from the edge.
This is an example of elegant and nonelegant corners, with corrections highlighted.
Here you can see the cognitive load that shows why this is elegant.
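The corner rule can be sketched as one line of math (the function name is illustrative):

```python
def nested_radius(parent_radius, edge_distance):
    """Corner radius that keeps a nested card's corner concentric with
    its parent's corner: parent radius minus the gap between the edges,
    clamped at zero for tight insets."""
    return max(parent_radius - edge_distance, 0.0)
```

For example, a card inset 4 units inside a parent with a 16-unit radius gets a 12-unit radius, so both corners share the same center of curvature.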
Cliffing removes the margin between the edges of cards in a stack, as long as one margin is exposed to identify the stack.
Cliffing allows for unlimited stacks without losing space due to margins.
In your biases you can set how many margin layers to show before automatically cliffing.
The image above is set to one margin layer on the left.
There are multiple options to remove space between cards on the same level while maintaining distinction.
Since, principally, content on cards should not overflow the edges of the card, when it would, we should use the alternative methods below to contain content properly.
The fact that content is overflowing could be indicated by:
flattening the end of the card,
fading opacity towards the end,
or animating the card end with a scaling bounce.
This continues a list of content onto another line, expanding the card's shape vertically.
This simply cuts off the content at the ends of the card. It requires editing and moving the content, or scaling the card up to view the content.
Sometimes the content can be summarized to communicate the idea while using less space.
A play button could appear to scroll the content in a loop. Or a global play tool could start this, or your bias settings could make all overflowing content constantly scroll.
An eject button could appear that lets you scale up the card to fit the content and then collapse it back down to the custom scale. Or you could use the universal fold tool used for mods to do this.
This scales the content to make sure it fits inside the card.
It can be combined with Wrap or other methods to maintain a minimum readability bias. You could even scale just the last bit of the content as needed to get as much as comfortably readable as possible.
If overconstrained then the content will be tokenized to represent text in general for example. This could be combined with other methods to get as much as comfortably readable as possible, and just tokenize the excess.
Sometimes the content almost fits in the margins, but would easily fit on the card. In these cases you can forgive the failure to stay in the margins, rather than cutting off the last letter of the idea.
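The Scale and Tokenize fallbacks can be sketched together, assuming abstract width units and a made-up minimum readability bias:

```python
MIN_READABLE_SCALE = 0.5  # readability bias (illustrative)

def fit_content(content_width, card_width):
    """Return (scale, tokenized_fraction) for content on a card.
    Content scales down to fit, but never below the readability bias;
    whatever still cannot fit readably is tokenized instead."""
    if content_width <= card_width:
        return 1.0, 0.0
    scale = card_width / content_width
    if scale >= MIN_READABLE_SCALE:
        return scale, 0.0
    # Overconstrained: show as much as is comfortably readable,
    # and tokenize the excess.
    readable_width = card_width / MIN_READABLE_SCALE
    return MIN_READABLE_SCALE, 1.0 - readable_width / content_width
```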
If cards do not fit, they can scroll off the card beneath, but there is also a bias that defines a number of cards that can pack at the end of a card, to give the operator perspective on where they are in a list. Notice how the stacked shadows naturally highlight the excess.
The empathy engine measures and maintains the operator’s reach and legibility.
This watch represents the minimum empathic scale of the interface.
From this scale you can access everything in the interface.
Swipe up or down to scroll through the current menu.
Swipe right to navigate the last menu.
Tap to open any concept, in this case, the add menu.
Swipe left to hide the menu and view the 3D universe.
In this case, a smartwatch would let you use motion tracking to navigate that 3D view.
Tapping from the 3D view will hover select the centermost shape.
Tap again to select or swipe up or down to scroll through the possible alternative targets.
Scaled Message
It's even possible to show a full notification on a card the size of a long word, with speed-reading flashing. Words longer than 12 letters would need to be broken down into flashing syllable groups.
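A sketch of that flashing breakdown, with one caveat: real syllable splitting needs a hyphenation dictionary, so fixed-size letter groups stand in for syllable groups here.

```python
MAX_FLASH_LETTERS = 12

def flash_groups(message):
    """Break a message into the units to flash one at a time on a
    word-sized card. Words over the limit split into smaller groups."""
    groups = []
    for word in message.split():
        while len(word) > MAX_FLASH_LETTERS:
            groups.append(word[:MAX_FLASH_LETTERS])
            word = word[MAX_FLASH_LETTERS:]
        groups.append(word)
    return groups
```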
Automatic Scale
If you use a larger gift card sized screen from the same distance, or move the watch closer to your face, the empathic engine will scale down the interface automatically.
Whether the empathy engine automatically scales the interface, or you manually scale a card, rank triggers more information to be displayed according to rules.
In this example the first rank is an icon, but as we scale the card, the tool’s name appears.
If we had a larger screen, we could continue scaling out, next exposing the position HEX number, which combines the 3 axis positions. You could copy this to another shape and it would move to the same place.
But if we continue to scale out the card the individual positions become visible by replacing the position HEX.
Next, as we scale up the card, selections become available, not next in the list, but inserted before, offsetting the information at rank 3.
These methods don't only apply to the display size and scale of the card, but to the visible scale. This means if one card is covered by another card/shape, whatever remains visible is what the empathy engine has to work with.
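The rank behavior in this example can be sketched as a lookup. The rank thresholds and the scale-to-rank mapping are assumptions; only the field list follows the position-card example above:

```python
# Rank -> field, following the position-card example.
FIELDS = [
    (1, "icon"),
    (2, "tool name"),
    (3, "position hex"),
    (4, "x/y/z positions"),  # replaces the hex once exposed
]

def visible_fields(visible_scale, scale_per_rank=1.0):
    """Fields exposed at a given visible scale. Covered area does not
    count toward visible scale, per the text above."""
    max_rank = int(visible_scale / scale_per_rank)
    shown = [name for rank, name in FIELDS if rank <= max_rank]
    if "x/y/z positions" in shown:
        shown.remove("position hex")  # individual axes replace the hex
    return shown
```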
Documents, lists, spreadsheets, databases, toolbars... these are all cards with multidimensional values.
As an alternative to scaling, you can scroll to expose more information on a card. This is vital whenever there is more information than there is space to display it on. Cards are like a window frame that you can view an infinite space through.
We need to develop a UI solution for displaying whether or not there is more to scroll or if you’ve reached an end. How do you know that there is more to see without trial and error? Is this best done with a scroll bar controller like on the left or some novel solution?
Buttons are cards too. They are often scaled down so much that all that’s exposed is a single icon.
All cards are buttons: when you press them with a play pointer, they will either do something by bias, or, if no bias was set, open a list of possible actions by default.
Button cards clearly indicate the surface available for pressing.
An operator should be shown a button once and be able to recognize it anywhere.
A button should never be bare icons/characters/models/words; it must be contained in a card.
In this youtube screenshot, where are the buttons?
Here they are.
Let’s fix it.
Still needs consistency and association work, but now at least the buttons are clear.
When you are near it with a pointer, it will change to highlighted mode:
it visually communicates that the system thinks it is what you intend to reference,
and its icon animates to teach the operator what to expect it to do.
You can share highlights which are simply selections that share a unique highlight tag.
The system AI can also highlight cards, including buttons, for guidance when you ask for help.
You can bias hovering to actually scale up the card as a preview; if you hover off, it will scale away again, but if you click it, the card will stay scaled open.
I considered integrating a highlighting button for each measurement in the measure tool, but realized instead there should be a universal highlighter to help you track how things connect. This will be accomplished with the Inspect Tool.
When you click the play button on your mouse, or tap it with the play pointer, it will move to the next card and spring back in its activated mode. Then it will either expand, consuming or tabbing the button, or follow other instructions in its Mods.
Buttons are never disabled. Would-be disabled buttons instead become approvals for recommended required inputs (greyed in). Say you need to fill in some information before sending: the send button will be highlighted in the color of the default inputs. For example, the name field will be filled in with your name by the system with a grey highlight, and the submit button will have a grey highlight to show that clicking it will fill in the field with the greyed-in recommendation.
In the past, progress bars have been practically impossible, because there were so many unknowns: they were measuring an unknown user's hardware, unfamiliar side processes, and available fragmented space. For the first time we can solve these problems, because it's all running on our hardware; only the screen is streamed to the user, and what device they use doesn't matter.
If a function is going to take apparently no time to complete, then the indention animation should be sufficient to represent that the command has been understood.
But if it would take longer, such as procedural terrain generation, artificial intelligence training, or accurate physics simulation, we can attach a loading bar to the button and/or the feed.
Sometimes you see buttons with an icon that indicates what the button will do if you press it. Other times you see buttons with an icon indicating what the current state of that option is.
UI should make it clear which is which; the reliable solution for communicating a toggle is a toggle switch. Here is an example of a digital slider on the left and an analog slider on the right.
This is a controller that outputs a value.
You can Act Click to bring up the value’s card and directly edit it there.
You can also scale this controller to add more precision.
You can choose styles for the cards to indicate how a control can be used or to distinguish it further than the icon alone. Remember everything is a real 3D Shape with a real Material.
Tabs are cards that intuitively communicate geographic association by visually fusing the scaled up card with the button.
This is the most familiar appearance of tabs (seen in web browsers). The selected tab sticks out past the cliff's edge; unselected tabs underlap the cliff. Here it is shown in minimal rank, but these can be wider to expose more details.
This is an alternative elegant appearance of tabs. This is a group of buttons that pinch out to indicate the state of the layer below it. This allows it to be just another item in that layer as it does not need access to an edge.
This acts just like the others, except it does not break the card beneath. Instead it just indents to show that filter is applied to the next layer down.
An instance is the most familiar use of tabs (popular in browsers), for quickly toggling between various groups of filters. This is compatible with any of the appearances above. We've shown single icons so far, but these can be scaled to show more details per tab. Notice below how adding more tabs, or scaling to more detail than there is space to display, limitlessly scrolls off the card.
A path is a set of instance tabs generated like breadcrumbs to show either your history or hierarchy.
Shows where you are in the furthest tab to the right. As you navigate to another card in this tab, it is indicated in that furthest-right tab, and the previous tab is pushed to the left.
Shows the shortest list of steps to the top level of the hierarchy you have filtered for, regardless of your history.
Toolbars are instance punched tabs.
This is a toolbar with a path history bias. Rather than closing a tool when you open a different one, a toolbar can be biased to stack new tools, pushing previous tools down into a scrollable history, like a notification feed, so you don't have to hunt through multiple levels of toolbars each time you want to switch between a few tools. The side effect this was originally designed to address is preventing popups: here, new tools appear in a predesignated space rather than blocking your view or sitting out of reach.
Shapes with Meaning
A user needs to quickly and accurately identify the icons above, just enough to compare them with the goal icon in their head. They can choose one of many methods, including matching or eliminating a specific feature. The feature could be a distinctive (unique) shape, color, or style.
The first row of icons sports many different styles, colors, and shapes. The developers wanted to unify their style, and thus visually group their brand.
The second row was developed under a unified set of style rules to accomplish the grouping. This helped the user identify the group of icons.
Since color was also unified, and the user finds color easier to search for, they defaulted to using it to identify the app, which led to a high percentage of misidentifications. The shape was distinct, but color was the preferred method.
In the fourth row, by removing the color, the user switches to the next preferred method: identifying shapes. Since the shapes are unique, they are easily identifiable, distinguishable, and intuitive.
Color isn't bad, but consider its effect: by mandating your color you steal it from the operator as a tool for identifying groups, distinctions, or other filters that are important to them.
See Humble Principle
A picture is worth a thousand words, and an animation is worth a thousand pictures. We can communicate huge concepts with very little space. Hover over a button and it will teach you all it knows: what it does, the many ways to use it, and it can even scroll its name one and a half letters at a time.
If you have ever scanned a list of icons looking for a specific one, you know how difficult it is, because your peripheral vision just sees dots, so it can't be used to narrow your search. But your peripheral vision is designed to identify motion, so animation will help immensely in finding your goal in a sea of icons.
One big problem with icons on buttons today is how we know whether the icon represents the current state or the state that pressing it will cause.
For state changing icons, we should show the current state and then when hovering over the icon animate how it will change and hang on the last frame.
Hovering isn't always available.
Eye tracking could count as hovering and trigger the animation.
It could always be animating while highlighting the final form as the result before looping.
Finally, it could instead be a toggle slotted card that shows both states and which one is selected like a switch.
Some buttons don't change states but rather open cards, or are shapes that can be added to space. These can be rotated in place on the button like a preview, and even edited or used as if they were in space, because they are: they are in list space.
All shapes in space are 3D, and we should consider adding depth to our communication. This is important, if not to the user of an icon, then to others who observe, who can easily recognize an icon even from the sidelines.
Alternatively, a bias could be set so that content always faces all observers, even from the sidelines.
What is a letter but a shape? All the shape tools apply to letters, and all the letter tools apply to shapes. You can change the style of a font by adjusting its curves directly; you can save this style for later, or update all your previous uses of that shape with this change. You can align and space your words, and just as easily align and space your planets.
Since all shapes in space are 3D, so are fonts, so why not take advantage of that?
There is an option in Bias Settings to add typographical defaults to layers, focus, or other preferences to visually prioritize text, but we will not set these by default, as per the Humility Principle. We will let the user set their own priorities rather than enforce our own taste. Also, these biases do not support infinite layers, because eventually the font will be too thin, small, or faded to read, so the empathy engine would need to maintain these anyway.
Here you can see, on the left and middle, a beautifully typographed card, and on the right, the humble card. Since beauty is in the eye of the beholder, we will practice humility by default.
For the feature that handles special positioning, see Measure Tools
It is possible to generate unlimited unique icons, which can serve as placeholders for meaning until/unless an icon is assigned to replace them.
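One possible way to generate those unlimited placeholder icons is the "identicon" approach: hash the name into a small mirrored pixel grid. This is an assumption about implementation, not the document's prescribed method:

```python
import hashlib

def placeholder_icon(name, size=5):
    """Return a size x size grid of booleans, mirrored left-to-right so
    the icon reads as a deliberate shape rather than noise."""
    digest = hashlib.sha256(name.encode()).digest()
    half = (size + 1) // 2
    grid = []
    for y in range(size):
        row = []
        for x in range(half):
            # Each cell is on or off based on one digest byte.
            row.append(digest[(y * half + x) % len(digest)] % 2 == 0)
        # Mirror the left half onto the right.
        grid.append(row + row[:size - half][::-1])
    return grid
```

The same name always yields the same icon, so a concept keeps a stable placeholder until a real icon replaces it.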
The ideal font would be procedural, so its features could be dialed to best suit the use case.
You can control things like scrolling directly with a physical control, like a mouse's scroll wheel, or you can use an onscreen control.
Onscreen controls are shapes with Mod rules, connecting a value, like relative position, as an output variable. Let’s build a scroll controller as an example.
First let's add a card.
Then we will divide it and keep our information organized.
Each division of this card has a unique ID that we can refer to later.
Next let's make our handle shapes.
Extrude a rounded circle, then add and name a point on both ends.
Constrain and align the handle's movement between the points.
Add measurements from the points to the center of the handle and between the points.
Now we can do some math on our card, finding where the handle is relative to point 1.
Measurement 1 minus Measurement 2 equals Difference.
Difference as a percentage of Measurement 2 equals Percent.
Let’s elevate this division(cell)’s rank.
If we scale down the card now, just the output number is visible.
And if we move the handle it will change the number from 0 to 100
Now add a Mod
Set when to “input changes”
Set do to “card scale position”
Set condition “X” by connecting it to the output cell from the card
Finally we set the condition “Target Card” to the card we want to control.
If we move the handle it will scroll the card up and down.
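The controller's math can be sketched as below. The document derives the percent through a difference step; this sketch uses the direct ratio of the two measurements, which gives the same 0 to 100 output when measurement 1 is taken from the handle's zero end. The Mod wiring is reduced to a plain callback, and all names are illustrative:

```python
def handle_percent(m1, m2):
    """Where the handle sits between the points, clamped to 0-100.
    m1: distance from point 1 to the handle; m2: distance between points."""
    return max(0.0, min(100.0, m1 / m2 * 100.0))

def make_scroll_mod(target_card):
    """Mod sketch: when the input changes, set the target card's scroll
    position to the new percent."""
    def on_input_change(percent):
        target_card["scroll"] = percent
    return on_input_change
```

Moving the handle updates the percent cell, and the Mod carries that value to the target card's scroll, just as in the walkthrough above.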
We could use gestures to control something, but there should be no gestures by default, as they can be accidentally triggered.
(TODO: Add pictures of control creation.)
There are many more features we could add to our control like;
pinning it to any active card,
scrolling in 2 dimensions,
scaling scroll speed,
and driving infinite lists.
But you get the idea of how controls work now.
Let’s explore the fundamentals for accomplishing anything imaginable with both precision and ease, and based on the Tool Principles.
Often called gizmos, they are handles that let you control things indirectly. These can be scrollbars, dials, levers... Any shape can be used to control any value.
Function: This displays what tool is currently applied by the pointer.
Target: This defines where the function can currently be applied.
This has many styles depending on your need;
Hand, Scope, Text, Long, Short, & Smooth; a loose posture to smooth movement.
Parent Handle: This dedicated space can be grabbed to move the Pointer without obstructing either the target or the function.
Function Scale Handle: The Function can be grabbed to scale the card hiding or exposing more detail without committing the crime of popups.
Function Posture Handle: The Function can be rotated around the Parent Handle.
Target Posture Handle: This arm can be grabbed to change its relative position to the Parent Handle.
This modular design allows you to customize a pointer depending on the moment’s needs. You can use the following steps in any order;
Choose a function(tool)
Position the pointer
Activate the function
Here are some examples of handling and styles
The Select, Act, and Play controllers are simply more functions of this controller.
In VR different pointers can be attached to each finger each with its own tool.
The concept of selection is to identify a set of shapes or cards of interest.
From this you can perform actions on these lists.
Here is a case study which describes in detail how selection operates.
Select multiple fields and enter a value, and they will all be changed to that value. When you select multiple items, their parameters are listed, and like definitions generate groups that allow you to define the whole group by updating one field.
This is a third pointer that lists all possible actions for the highlighted shape.
In the past, this was similar to a right click menu, but instead of a limited, layered popup, the list that is generated by Act has all the functionality of a Card.
__Add Image__
These tools are symbiotic: you can't have a long menu that you can dive into to do anything if you don't have the freedom to shortcut what you want at the top level.
This is a separate pointer that performs the action that the designer assigned to the shape.
In past operating systems, selection and action have been assigned to a shared button, but here the concepts are unambiguous and reliable.
__Add Image__
The controller(pointer) can have an icon communicating what the play action is before you click. You can expand the pointer like any card showing a textual description.
This opens a list of groups inside the selected object.
This can also be used to highlight relationships.
This adds shapes along your stroke, or modifies the material of shapes along your stroke. This can be configured to Define, Stamp, Paint, Fill, with a variety of spread effects and sticking rules.
__Add Image__
You can either use one pointer and configure it each time you need a change, or you can save your pointer configurations to easily switch between your most commonly used configurations.
Having multiple cursors could solve this, keeping widgets or quick menus out of the way: switch cursors to use them while leaving your other cursor near the shapes, so you don't need to move back and forth.
With separate cursors, it's easy to set a position before you've decided which shape is best: you can tap through the various shapes in the list and they will all appear at that position, so you can preview them in place.
__Add Description__
A handle is something that displays and controls definitions, like a curve, color, position, or any other tool. It's important when designing tools to also design a handle set that allows for either direct or indirect control/display.
How do you define a curve?
Referenced constraints like tangent, diameter, radius?
Control points like b-spline, cubical, bezier?
Why not all or any depending on the situation? A curve has many definable handles that you can make visible as needed.
This value attracts or repels.
In past vector drawing, if you wanted to add extra definition to a curve, you had to divide it into multiple curves. This is great for storage, but the user doesn't need to know it, and it makes it harder to keep a line together. I've been addressing this by hiding the technical storage facts and instead using forces to deform the curve.
Agile Handles use separate angle and power values, but since CoOS vectors are 3D, it gets complicated, so we added the Force Handle option too. It can be applied to points, edges, or surfaces. Entire shapes can be used to sculpt other shapes with force.
How do you define a position?
Coordinates like XYZ?
Handles like Position, Rotation, Scale?
Axis like Global, Local, Custom?
Why not all or any depending on the situation? A position has many definable handles that you can make visible as needed.
You can resize shapes with handles not only after selecting the Size Tool; you can design tools to activate depending on what your cursor is near. For example, when you approach the edge of a card, a Size Handle can appear so you know how much further you need to go before you can resize the card, and so you don't need perfect precision to hit the infinitely thin edge.
This design is meant to address the precision required in the past (seen below).
3 pixel precision and invisible trigger zones are not an Empathetic Interface.
We should adapt to the user and use margin to eliminate interface obstruction (friction).
How do you define a color?
Referenced HEX values, pantone values, or color picker?
Control points like RGB, HSL, CMYK, LAB?
Spectrums like palette, wheel, swatch?
Why not all or any depending on the situation? A color has many definable handles that you can make visible as needed.
Anything can be a controller, just connect a definition on a shape, such as a location, to any other definition and when you change it, by moving the shape in this case, it will control the second definition.
See widgets in unsorted for more.
Materials can themselves be grouped in a volume. Much about the individual particles can be inferred through parenthood ratios, unless proximity triggers precise measurements. Planet-wide groups could have summary properties that allow for scanning without having to check in real time.
__Add Widget Research from unsorted__
This visually groups your selection in a Card, or adds a blank Card for you to fill with a list later.
Use Shape Tools to control the shape of this list;
Divide concepts into multiple dimensions,
Extend to merge multiple concepts into a single concept.
For many purposes, filters are adequate for refining a list of existing information, but some lists will need new information; you can use Wizards or Mods to generate it.
This changes how the list/space appears with stackable rules(filters).
This refines what is viewable vs what is hidden in a list or space.
There is no edit mode or play mode only filters.
Examples: Exclusions, Inclusions, Nested, Author, Time, Tags
This refines selectability of shapes in a list or space.
Like any list, space can be filtered to allow selection or other interaction with only what you refine.
This refines the order of the items in a list or space.
Examples: Creation Dates, Creator Names, Item Tags, Alphabetical, Numerical, Reverse...
If you don’t know what the name of an information type is to use as a rule, you can select the information from an example to filter the list of filters.
Refines the priority of each item in a list or space and can be used to simplify or hide items as the card is scaled down, preserving access to as many of the highest-priority concepts as possible.
Examples: Search, Folder, Tab, Card, Microchip...
Search is just a custom View Scope, “Hide all but ____”.
Folders and Tabs are buttons that add a stack of filters to the master list in real time. The concepts are not actually “in” those tabs or folders.
Cards are scopes that filter based on the boundaries of the card's shape.
Microchips are presets that scale and rank Mods.
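The scope mechanics above can be sketched as stacked predicates over one master list (a sketch; the item structure and helper names are assumptions, not real CoOS APIs):

```python
def apply_scopes(items, scopes):
    """Stackable view rules: an item stays visible only if every scope passes it."""
    return [item for item in items if all(scope(item) for scope in scopes)]

# "Search" is just a custom View Scope: hide all but ____.
def search(term):
    return lambda item: term in item["name"]

# A "Folder" button is a preset stack of filters, not a real container:
photos_folder = [lambda item: "photo" in item["tags"]]

master_list = [
    {"name": "beach.png", "tags": ["photo"]},
    {"name": "notes.txt", "tags": ["text"]},
]
```

Opening a folder and typing a search are then the same operation: pushing rules onto the stack.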
Effect
These scopes can be activated fully or set to a lesser effect such as opacity or sensitivity.
This Forgiving Interface keeps your work safe.
Every action and result is stored in the Master List.
Anyone can go through this list and make a copy of any model or copy a group of actions to be used as a macro, animation, or tutorial.
You never have to save or back up your work manually, thanks to History automatically saving everything, but you can copy the time stamp to a list to be referenced later along with all the items tagged with it.
Any value you can reference, such as a measurement, however brief, is stored forever, even from history after deletion.
Reference Methods
Statically: how the value was historically at a specific time.
Dynamically: reference its contextual field regardless of the value.
Inherited: if many items are set to reference one and that one is deleted, the rest pick another to be the parent reference.
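The three reference methods could be modeled roughly like this (a sketch; only the inherited fallback is worked out, and the class shape is an assumption):

```python
class Ref:
    """A reference to a value; if an inherited parent is deleted,
    the survivors pick another parent reference."""
    def __init__(self, value=None, parent=None):
        self.value = value      # the statically stored value
        self.parent = parent    # the dynamically referenced item, if any
        self.deleted = False

    def resolve(self, siblings):
        # Inherited behaviour: a deleted parent is replaced by a surviving sibling.
        if self.parent is not None and self.parent.deleted:
            survivors = [s for s in siblings if not s.deleted and s is not self]
            self.parent = survivors[0] if survivors else None
        return self.parent.value if self.parent is not None else self.value
```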
You can undo your last action.
You can undo someone else's last action.
If you change something someone else made, a copy of that shape with your definition is created, so they can still view it as they expect, if that is their bias.
You can refine your undos to only affect a selected group through scope/reach controls.
You can not only redo what you undid; you can redo what you last did, or any other performed action.
If such an action lacks a definition, it makes one up; if it needs a shape, it makes a phantom shape to be later replaced by more desirable geometry.
It looks for patterns to help you optionally repeat groups of actions.
This Humble Interface design provides an out-of-your-way space to receive messages. It can, like any app, be a single card and scaled to preview a single line of the latest message, or scaled out to see it all and respond.
When icons and intuitive interfaces aren't enough, sometimes the best way to communicate an idea to the operator is just to talk to them. You can use their existing message feed like another operator and communicate things like;
What I'm doing
What I'm loading
What's a shortcut to what you are trying to do
What I'm waiting on
What I don't understand
Answering questions
Asking for clarity
What I’m listing; selections, candidates...
When we use the selection controller, we hover over shapes, and in your feed you can see the system making a list of candidates to predict the most likely shape you intend to select if you click now. With a second controller you could expand that list and select from it indirectly, instead of from space directly. After you click, you can see another list in your feed that the system builds from your selections. Expand it and you can see it tagging each of the shapes with a unique tag and then filtering by that tag. From here you can deselect, filter, or even refine one, because the candidates are baked into each selection, so by expanding the candidates for a given selection you can choose another.
This is to help obsolete the annoying, in your way, popup windows.
Popups get in the way and slow down the workflow as I drag them to the side.
Each tool button you hover over instantly shows the tool's window in the designated tools feed on the left, pushing down previously opened tools.
If you click on one it will stay there; otherwise it disappears as you move off the button. This is a way to preview and identify features without waiting for the hover timeout to occur.
This timeout is also unnecessary for selecting features in the 3D viewport. As you move your cursor around an object, the near selections appear in their own tool window; you simply scroll to pick the feature you are after and click to select. Alternatively, you can click the general area you want to select from, then scroll through and select from the list in the tool's window.
You can still detach windows, as you can see with the calculator tool, but this is now under the draftsman's control.
No need to waste space with tabs for each open project, as these can be switched between in the outliner tool, at just one level higher than we have today.
We can also customize the ribbon's tabs so they match our personal workflows. Anything you need is available in the red universal menu in the corner or by searching in the search field. You can simply assign any of these tools to a tab for easy access.
This isn't a designed feature; instead it's an emergent feature of the Refine design.
Toolbars are lists that can become all-in-one superapps allowing you to tear off and remix any function from any other card. This is your taskbar, start menu, tray, file menu, toolbar, favorites, and more.
This isn't a designed feature; instead it's an emergent feature of the Card Interface design.
Here are some ways that you can scale layers of the toolbar.
You can keep all layers visible and scaled down. By clicking the open tab you can close it again.
You can scale over the whole tool bar and everything will fill it, the highest layer with priority.
You can scale to hide the lowest layer.
You can scale to hide all layers except the highest.
I've changed the default card color from white to instead shade each layer differently, to highlight which layer is which without content (icons), which would normally make this obvious.
Finally you can tear off a layer as a standalone card so you can leave it open regardless of the layer. This allows you to open multiple tools at once. These can be added to a tool feed.
This tool adds shapes or groups of shapes, including text and cards with the Brush Controller.
Sometimes it’s an easier workflow to just use or make a Wizard generate changes to shape definitions.
Making a rectangle could be a list of definitions in various orders; you can see this list, type the values in, and Tab and Shift+Tab through them. This frees your cursor to select references.
The opposite would be a gizmo that locks your mouse to a definition rather than other uses.
A shape is made of these 4 parts, which can be grouped.
Points are defined by
XYZ coordinate values
Points are visualized by
Relative surface handles around those coordinates.
By minimal default, a single diameter is defined.
This Sphere can be shaped like any other surface.
A material can be applied like any other surface.
Points can be controllers, like handles for defining curve length or any other value.
Points can also fill shapes for volumetric clay or clouds
Points can also act like a 3D pointer/empty, allowing you to aim your actions, like placing new shapes or selecting existing shapes.
Lines are defined by
The path between two points.
Relative curve handles.
By minimal default, they lie along the path, producing no curvature.
Curve handles are points
Lines are visualized by
Relative surface handles along that path.
By minimal default, a single diameter is defined.
Diameter handles are points.
This pill can be reshaped like any other surface. Reshaping is non destructive and is relative to the default diameter so the infinite edge acts like a bone for reshaped pill skin around it.
A material can be applied like any other surface.
Lines can be controllers, like handles for defining curve angles or any other value.
Bones, paths, guides, and measurement annotations are examples of this.
Line connection can also be used to connect mods.
Hair, and grass are more examples of lines
This is not a pixel, but can be used and optimized as such.
Surfaces are defined by
A lofted nurb between lines.
Extra handles or points can be added to further define a surface’s shape.
Surfaces are visualized by
Relative mesh of points.
A material can be applied like any other surface.
This can be represented with a thickness definition.
This is not a polygon, but can be used or optimized as such.
Volumes are defined by
4 or more other shapes.
A defined or lofted manifold(closed) solid.
A material.
Volumes are not visualized
But are stored for simulation, surface creation, and other processes.
Size By
Size To
On its own, the concept of a Sweep Tool is not hard to learn;
You can create shapes by sweeping a Shape Around or Along other shapes.
But here we can see 5 Redundant and Bloated tools; Sweep, Extrude, Thicken, Revolve, and Hole.
To master this interface you would have to learn over 50 features!
By consolidating these into a Concise Sweep tool, we can produce a fully powered interface that suggests you only need to learn about 7 features!
Copy Target
Paste
At Target
Along Target
Around Target
Target Where Clicked
Quantity
You can copy an object and then select another object's specific field, like X, and then paste; it will find the X of the copied object and paste it into that field.
You can copy an object, then select another object and apparent-paste, and it will copy not the values from the first, but the value that would make it appear the same. For example, say 2 translucent shapes overlap each other from your perspective, one yellow, the other blue. You use the tool to sample the yellow one where they overlap, and instead of pasting yellow, it pastes green.
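The yellow-over-blue example could work something like this multiplicative filter model (an assumption about how "apparent" sampling might composite; the real Space Engine could use any rule, and the particular blue value is chosen for illustration):

```python
def apparent_color(front, back):
    """What the eye sees through a translucent front shape: each RGB channel
    of the back shape is filtered by the front shape's channel."""
    return tuple(round(f * b / 255) for f, b in zip(front, back))

yellow = (255, 255, 0)
blue = (0, 128, 255)  # a screen blue with some green in it
# Apparent paste stores this blended result, not either source value:
overlap = apparent_color(yellow, blue)  # a green: (0, 128, 0)
```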
Selecting is just sorting by nearest to your pointer. Copying is just adding a numbered tag to all items in range. Later you can reference that list and use it to paste some of it. This allows you to point at the general vicinity, then later choose what from that area you want to use: this object or the one next to it, the object's color or its location...
What you see is a relationship between the Space Engine and Material Tags. If the Space Engine is set so that only shapes with the Material Tag "Blue" are rendered as shiny, then to make a shiny shape appear you would either need to add the "Blue" Material Tag to a shape, or modify the Space Engine settings to include more tags.
Let's make a simple Material
Let's mod the Space Engine
See Emergent Tools for Procedural Materials
These are saved groups of material states.
Material Values affect the results of the Space Engine. Here are the most common Values.
This can be used for things like lasers, stars, phones, speakers, and mouths.
One material feature is to be aware of the space around it.
This requires a location and a direction, though more details can be added, like sensor and lens size and shape...
You can constrain and mod and animate a camera shape to make presets that cover all the camera types; First Person, Third Person, Isometric, Map...
Camera sensors are on objects so your in-game phone could have a real camera on it.
You can add Image Render Mods to filter the camera's image. You can save the camera output as an array of points (photo), a casted depth (scan), or a list of them (video), but most importantly you can use them as a reference to a historical storage of an event. You can recreate the environment and rerun the event from the time and location tags of a photo.
Materials can also be sensitive to sound and that can be stored.
Materials can be given unlimited values.
Mass
Strength
Flexibility
Mass
Modifier
Electric Conductivity
Heat Conductivity
Not hard numbers but change curves
Radiance
Decay
https://en.wikipedia.org/wiki/List_of_materials_properties
Mechanical Properties
Chemical properties
Electrical Properties
Optical Properties
Force Properties
Magnetic
Gravity
Acoustical properties
Atomic/Radiological properties
Thermal properties
Need to make a wide minimal list of states.
Materials can be given arbitrary State Values. Chemistry is a system of mods that determine how states change depending on other values;
Water+Paper=Wet Paper=Dissolving
Dry Paper+Fire=Burning Paper
But more detailed values can be provided; for example, specific temperature values could change the state of the object.
Boiling Point
Melting Point
Need to make flexible system for defining state changes.
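A flexible state system like the one called for above could start as rules keyed on pairs of current states, plus numeric thresholds (a sketch; the rule set and the water numbers are assumptions):

```python
def react(a, b):
    """Pairwise state-change mods, like Water + Dry Paper = Wet Paper."""
    rules = {
        frozenset({"water", "dry paper"}): "wet paper",
        frozenset({"dry paper", "fire"}): "burning paper",
    }
    return rules.get(frozenset({a, b}), "no reaction")

def state_at(material, temperature):
    """More detailed values: temperature thresholds change the state."""
    if temperature < material["melting_point"]:
        return "solid"
    if temperature < material["boiling_point"]:
        return "liquid"
    return "gas"

water = {"melting_point": 0, "boiling_point": 100}
```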
Tools of the past have either been high friction high precision or low friction low precision. In the CoOS measurement tools let you build without friction and then add precision later.
Other systems have you recreate dimension shapes if you want to redefine them, but CoOS principally has Humble Flow, so you can redefine the original shape and what it is connected to. All shapes are dimensions; it's just a matter of what is reported or enforced.
This is just a list of all the measures, each scaled into a single row. Scale between the 4 components to examine their details.
Joints, Bones, are just Measures too.
Animation is a measure that is enforced relative to time.
Instead of just 20 inches, these 2 points can freely move between 10<20 inches as an analog range, or a digital range among a set of dimensions such as 5, 10, 15, 20, or a combination of analog and digital like 5, 10<15, 20. It can include infinities, such as greater than 10, or 2 separate infinities can combine into an analog range, such as adding less than 15 to make the range 10<15.
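Such a mixed analog/digital measure could be checked like this (a sketch; the segment representation and the inclusive boundary treatment are assumptions):

```python
def in_range(value, segments):
    """A measure allowed to be a mix of exact values and analog spans.
    A segment is a single number or a (low, high) pair; None is open-ended."""
    for seg in segments:
        if isinstance(seg, tuple):
            low, high = seg
            if (low is None or value >= low) and (high is None or value <= high):
                return True
        elif value == seg:
            return True
    return False

mixed = [5, (10, 15), 20]      # the "5, 10<15, 20" example
greater_than_10 = [(10, None)]  # an open-ended infinite range
```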
You can apply contradictory constraints to the same shapes and toggle between them.
Incomplete test
Just as there is both a default action and a list of all other actions for each selection, there can be a default center and a list of alternate centers to align with, and a default margin method and a list of all other margin methods.
But this isn't a special feature; it's a general feature, as any parameter can have defaults and alternatives. These are just extra reference shapes that you can target. If you want to target a specific shape, you can search for one of its tags, for example: center, weight center, ignore descenders, etc.
These shapes can be procedurally generated, such as weight with an area calculation.
For things like ignoring descenders, you can select various features of a shape group and then group dynamically based on what is included or excluded from a feature list, and then regroup it so its outermost features are recalculated for aligning.
Targeting one shape, this can measure the distances between any distinct features of that shape. Targeting 2 shapes more methods become available.
You can select a greyed out method, but the measurement won’t be available until appropriate shapes have been selected.
Until then, automatically generated points can fill in the blanks.
This acts like an educational example
This also lets you define distances before the shapes.
You can choose from various Unit Types: Inches, Feet, Miles, Meters, ...
Or make up your own.
Decimal places can be unlimited, but tolerances can be indicated by greyed-out values.
While you could store an unlimited number, the Space Engine may only process down to the CoOS Planck scale, and this can improve over time as technology improves.
With the Inspect Controller(tool, cursor, or hotkey) you can highlight related values.
You may constrain any distance.
The lock button indicates whether it is enforced or relaxed.
Supports various Unit Types: degrees, revolution percent, arc minutes, radians...
Supports degrees higher than 360 for winding animation.
The 3 required edges can either be inferred or generated as placeholder shapes.
Multiple constraints including ranges of angles act like a hinge. When multiple constraint values are added, they are an addition relative to an origin profile.
Supports direction and origin toggles.
Line Connections Equalize Values or Groups of Values and can be used with Mods.
Equal can copy values one way ("Backup") or both ways ("Sync").
This is an Equal Measurement but along any alternative axis or groups of axes.
This shows handles that define and control the curve.
You can choose between any handle type, Bezier, Cubic, Quad, and even a new type, Agile...
Bezier
Cubic
Quad
Agile
This is a sphere around a line end with a point on its surface to show the vectors of the curve, these vector points are scaled up enough to show a number as the power value. These points also have an arrow that indicate the direction and if you pull on it the value will increase without moving the vector. This is actually a Bezier calculation but it allows more independent control. You can scale up the sphere to give more precision over the vectors.
See Force Controller for additional handle option.
The Control Points are handles but are also just like any other point so you can constrain or otherwise modify it as you will.
Smoothness
Flatness
This refines a controller’s position based on proximity, tag, or other value.
Moving a pointer around space is like scrolling through a list. Snapping sorts that list.
Include scroll tuning to solve Snap menu interruptions.
Measures collision. Also evaluates stress/competing constraints.
When I was working on a collision solution, I came up with a new coordinate system which is relative and pair-based instead of objective Cartesian XYZ.
How it works: each vertex has a latitude, longitude, and distance to its nearest unique neighbor. If this is how it's stored, it's already available for collision calculations.
Twice a frame these distances are sampled; some math is done regarding the size of the objects, their speed, and whether they are changing direction towards/away (which would mean one has passed the other), all to output a check rate and correction. This registers a collision, and the material rules then affect what happens to the position.
The goal is to have perfect collision detection as efficiently as possible, and this technique would bake in a critical value into the data structure.
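The baked-in pair distance could be sketched like this, with the collision check reduced to a single comparison against the stored value (a sketch in 2D; the real design also stores latitude/longitude per pair):

```python
import math

def nearest_pair_distances(points):
    """Each vertex stores the distance to its nearest unique neighbour,
    so the value the collision check needs is baked into the data."""
    return [
        min(math.dist(p, q) for j, q in enumerate(points) if j != i)
        for i, p in enumerate(points)
    ]

def colliding(stored_distance, radius_a, radius_b):
    # A collision registers once the pair distance closes past the sizes.
    return stored_distance <= radius_a + radius_b
```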
Unfortunately, things like mass had to capture many more relationships, so it gets mind-bending. But there is also the possibility of LODing gravity with pair behavior, which is cool: a mass acts on clusters like a shader, where each motion is proportional and cheap, and clusters act like a single mass.
This enforces any value, be it a position, or distance, or connection.
Like parametric modeling, constraints too have a memory and combine or override past features. If newer memories are removed, older constraint memories can be in effect.
This evaluates a list and creates a list describing a comparison, using many methods;
Greater or less than, by how much, shape smoothness, material contrast...
This can be enforced to maintain a difference or similarity.
A network of connected ideas.
The letter P has a meaning connected with multiple sounds, multiple symbols(fonts) and itself can be connected to make up other meanings like words or faces :P
This is very powerful because if you want to, for example, find all numbers in a document, you don't have to search for 1, then 2, then 3... Instead you can search for "number" itself.
Side Effects
Solves multiple communication bugs, such as attached protected definitions, nonlinear conversation trees...
Numbers are nested inside meanings such as units (in, seconds, ml...), and that is nested in meanings such as type (Integer, Floating Point Number, Fraction, Percent, Unit Type, Ratio, Letter, Word, Sentence, Paragraph, From Group, Dimension...)
This acts as a constraint to hold the card’s value within a range of possible values.
Here is a list of types a property could expect, though you can make your own:
Integer, Floating Point Number, Fraction, Percent, Unit Type, Ratio, Letter, Word, Sentence, Paragraph, From Group...
Connect text to any concept (Shape or Card)
Concepts can be connected passively by connecting
Adding tags allows mods like filters to look them up.
Tags can stack, so a set of tags in order can emulate directories.
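Stacked tags emulating directories could look like this (a sketch; the item/tag layout and the prefix rule are assumptions):

```python
def lookup(items, tag_path):
    """An ordered tag stack behaves like a directory path:
    an item 'is in' a folder if its tags start with that path."""
    return [name for name, tags in items.items() if tags[:len(tag_path)] == tag_path]

tagged = {
    "beach.png": ["home", "photos", "2024"],
    "report.txt": ["work", "docs"],
}
```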
A selection's meaning can be reported or enforced to constrain the meaning of a value.
See Meaning
See Typing
Many more measurements/constraints are eliminated by these fundamental measures.
Coincident, along, tangent are all Distances of 0.
Parallel is an Angle of 0 degrees; Perpendicular is 90 degrees.
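Reducing the old special-case constraints to the two fundamental measures can be shown directly (a sketch in 2D; the tolerance values are assumptions):

```python
import math

def angle_between(u, v):
    """Angle in degrees between two direction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.hypot(*u) * math.hypot(*v)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))

# Each named constraint is just a Distance or an Angle with a fixed value:
def is_coincident(p, q):
    return math.isclose(math.dist(p, q), 0.0, abs_tol=1e-9)

def is_parallel(u, v):
    return math.isclose(angle_between(u, v) % 180.0, 0.0, abs_tol=1e-9)

def is_perpendicular(u, v):
    return math.isclose(angle_between(u, v), 90.0, abs_tol=1e-9)
```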
Today the world runs on code, every message sent, website visited, screen swiped. Code is a language only a fraction of the world can understand. Imagine if only 1 out of every 400 people knew how to write. How many world changing ideas would’ve never seen the light of day?
If you have to hire a professional to mod your stuff, then modding is not intuitive enough, because describing what you want should be adequate instruction for modding itself.
With interface design we can visually make your instructions intuitive.
With only a few words to learn (When, Do, Logic, Process), Mod is the easiest language in the world.
Now you can focus on your goals, on solutions, on your potential and moving the world.
Rather than flow-based programming, which handles how the computer executes, as many no-code tools did, Mod Tools are Concise Instructions based on capturing your intent.
When a Mod and Form is chosen, related Definitions are listed for you to detail your instruction.
In this next section we will go into detail and many of these features are for implementing Free Flow.
Actual modding requires much less information as one approach only uses a handful of these features.
Programming starts with an instruction. What action do you want the computer to Do? Drop in a Do Mod and select the Form of action. Once you do, various definitions will be presented and automatically filled in with default values. You're done! The computer has been programmed to do what you wanted. Alternatively, you can redefine these defaults, or add other Mods for more selective instruction.
But what if you have conditions? Connect a When Mod and you will see it has already picked the condition form, "Now." In fact, any Do without a condition specified is by default a now action, which is why you might want to set a condition first if you want the Mod to wait or check something before acting. If we select a different condition form, such as "Time", the action will be done at that time.
You can combine Do Mods too, and choose how you want them done. Sequential is the default Process Form: drop in some more Do Mods, and when activated by either of your conditions, each action will be done in order, one immediately after the last. Drop in another Process Mod and choose the Process Form "Priority Lane", and you can select another lane to help the Mod run faster. The more Mods in a lane, the slower they react. There are many other Process Mods that can help your program run faster, just in time, or together.
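The When/Do/Process story above can be condensed into a toy runner (a sketch; the class and field names are assumptions, not the real Mod language):

```python
class Mod:
    """A Do action guarded by a When condition; no When means 'Now'."""
    def __init__(self, do, when=lambda state: True):
        self.do = do
        self.when = when

def run_sequential(mods, state):
    # Sequential is the default Process Form: each activated action
    # runs in order, immediately one after the last.
    for mod in mods:
        if mod.when(state):
            mod.do(state)

log = []
mods = [
    Mod(do=lambda s: log.append("always")),  # a Do without a When acts now
    Mod(do=lambda s: log.append("at five"), when=lambda s: s["time"] >= 5),
]
```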
This could be anthropomorphized as people, streets, and boxes, to intuitively optimize processes visually.
This lets you combine and compare multiple When Mods before activating the Do Mod. By default, an AND logic is applied to any Mod. Let's instead choose the OR Truth Table from the Logic Forms. According to this truth table, any one of the conditions can activate the Do Mod.
Truth tables are an unlimited way to design any form of logic gate.
If you make a logic’s meaning viewable, you can see a card divided into rows and columns.
Down the left side we have all the inputs.
To the right we have a list of all possible combinations (cases).
And at the bottom we have a result for each combination(case).
You can rename the columns(cases, combinations) and rows(inputs) and rather than just 1 or 0 you can use any input value in each case, such as true, false, on, off, blue, up, go...
In this example we show 3 inputs, but you can have unlimited inputs; a scroll bar will appear to show there are more to be browsed. In this example we show 4 cases, but there are more to the right if you scroll over. The columns and rows are just indented to emphasize the relationships.
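Such a truth-table card could be stored as a list of cases mapping labeled inputs to a result (a sketch; labels like "go"/"stop" just illustrate that values need not be 1 or 0):

```python
def evaluate(table, inputs):
    """Look up the result column whose input combination matches."""
    for case_inputs, result in table:
        if case_inputs == inputs:
            return result
    return None  # no matching case defined

# An OR-style table over two conditions, with arbitrary labels:
OR_TABLE = [
    ({"a": "on",  "b": "on"},  "go"),
    ({"a": "on",  "b": "off"}, "go"),
    ({"a": "off", "b": "on"},  "go"),
    ({"a": "off", "b": "off"}, "stop"),
]
```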
If you use the Act Controller on the When Form, you can choose the Mod Form from an endless list.
Like any list, you can Filter and Scale it to find what you want.
Where do Mod Forms come from?
Once you select one of the Mod Forms, more cards will automatically appear. These Properties have Values that the Forms relies on. You can change these default values to further define the Mod, or leave the default values and move on.
A value is any content of a card.
Some Properties expect a specific type or unit format of data; if an unsuitable value is used, it will be converted into a suitable one.
Data is by default considered typeless unless the operator specifies a type; if a value's type is not defined, it will be interpreted through context and biases.
How is it being used? What is the typically preferred type in this context?
See Meaning Tool
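A minimal sketch of this typeless-by-default coercion, with entirely made-up rules (the alias set and unit parsing are assumptions for illustration, not defined CoOS behavior):

```python
TRUTHY = {"true", "on", "yes", "go", "up", "1"}

def coerce(value, preferred):
    """Convert an arbitrary value into the Property's preferred type."""
    if preferred is bool:
        # Context rule: many friendly words count as true.
        return str(value).strip().lower() in TRUTHY
    if preferred is float:
        # Accept unit-suffixed strings like "20meters" (toy parsing).
        digits = "".join(ch for ch in str(value) if ch.isdigit() or ch == ".")
        return float(digits) if digits else 0.0
    return preferred(value)

# A Target Property expecting meters happily accepts "20meters".
distance = coerce("20meters", float)
flag = coerce("ON", bool)
```

A real system would consult the Meaning network and the operator's biases instead of hard-coded rules.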
Lines can be used like wires to connect cards.
Though by default wires automatically measure themselves to detangle and smoothly transition between cards, wires can be shaped like any other line, and given materials or animations to indicate whatever is useful to convey, like flow and activity.
Physics can be applied to its material, such as yarn, to give it a more physical response.
Like any line, its material can be made invisible; in effect, its link to that value appears wireless. This can be used to unclutter space, and filters and tags can be used to make groups of wires visible or invisible.
To consider more connection functions, see the study.
This is like a socket that lets you plug in a line to copy the card's value to whatever is on the other end of the line. Drag from this output to generate a line, and drop it on another card. Like every shape, this is a physical shape, and with it you can make actual microchips, PCBs, cards, or cables that can be physically connected to physical pins, and the values they update will perform the instruction.
The input sockets are the Cards or Shapes themselves. This allows you to modify any level of a Card with the output of another.
The form icon can be preset to indicate what form of input will work best.
Add image
With Mods you can make your own Controllers and with Controllers you can change Mod values.
You can connect Shapes to When Mods to control When Mods and you can connect Do Mods to Shapes to control Shapes.
Add image
You can give Mods memory by saving information to a List, like a database. You can also change every aspect of that database and, in turn, send information to other Mods.
Like with any object in CoOS, you can arbitrarily connect a List to a Mod as a tag or comment.
Like any other List, Mods can be scaled down to a single bubble. Rank also applies, which comes in handy for microchips. Combine Mods, Rank the Properties you want to expose, then scale it down so you can see just those Properties, and add a Scope so you can easily scale to that size.
Add image
You can drop multiple Mods into a List and choose which features are exposed when scaled down with the Scope tool. Now this acts like a custom supermod, easy to plug in and access Properties.
Add image
These are Multidimensional Lists, that define the outputs for any combination of inputs.
Add image
You can make your own customized logic by making a list like this.
Then tag the list with the stack Mod>Logic>Type, and the filter that is automatically used when generating the list of Logic Forms will include yours.
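The tag-stack idea above might work like this sketch, where anything tagged Mod>Logic>Type shows up when the Logic Form list is generated (the registry structure is an assumption; the tag names come from the text):

```python
registry = []   # every shared list in the system, each with tags

def share(name, tags):
    registry.append({"name": name, "tags": set(tags)})

def logic_forms():
    """The filter automatically applied when generating Logic Forms."""
    return [item["name"] for item in registry
            if {"Mod", "Logic", "Type"} <= item["tags"]]

share("OR Truth Table", ["Mod", "Logic", "Type"])
share("My Custom Gate", ["Mod", "Logic", "Type"])   # your custom logic
share("Yarn Material", ["Shape", "Material"])       # not a Logic Form
```

The point is that nothing special registers your logic: the same tags that organize everything else also surface it in the right menu.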
Multiple Lanes
Simultaneous Processing
Frames
Queue
Pause
Developers identify the basic instruction primitives that serve as the minimum vital building blocks for any general use case. In a way, Mods are already completed programs that you mod as needed.
In fact, the tools we have already covered (controls, lists, shapes, materials, measures...) are all Mods; specifically, they are the most basic instructions that can be defined, combined, and conditioned to do anything. When you select the Selection tool and select a shape, you are actually running a Do Mod, choosing the Select Tool Form, and filling the Properties with your selections. All of this can be controlled by Properties and arbitrary values. With this in mind, you can create a Mod that selects every third point along a line, and save it as a tool for later activation. The perfect tool is one that can create any tool.
All the words used for Mod Forms and Properties can be customized to be more intuitive in whatever language you are familiar with. These custom names can be shared as a skin with other people within your culture to help them acclimate.
Are you a programmer in C# or Python?
Are you a trained technician or a chillaxed gamer?
Are you a chef or an athlete?
Apply the language skin that feels intuitive to you.
When/Do becomes If/Then,
Forms become Classes, or Sides,
Properties become Variables,
Mod Language is open like the rest of CoOS.
What words should we use to describe an instruction?
The answer is evolving with the field of programming and the world's languages, so Mod should be designed to be just as flexible. You choose the terminology, and the entire community of programmers contributes to a shared, unending list of immutable functions.
Why waste effort learning a new language when you already know one?
The modding terms should conform to what you already know, making them easier to learn and giving you more energy back to invest in other skills.
These concepts should be baked into a language system that automatically translates tutorials and discussions about modding.
Similarly the system should translate your gestures to activate tools and handles within the context of your custom environment, so if you don't have the tool on your toolbar it will access it from a searchbar or use and display your current hotkey for that tool.
Accessibility is critical when communicating to a computer. To that end we have multiple views that answer various use cases.
Each instruction is self-contained; lines are only used for copying values. This is what was described in the Mod Tools, and it prevents confusion over what is required.
These are distinct nodes that can be connected with lines. They have the advantage of visually being directly related to what they are modding. Here the conditions will be connected to a reaction by a line.
This will be approachable in space like any build project, you can jump on connection lines like rails and ride along them.
This infographic presents the instructions to maximize awareness and access to key controls.
Develop a Directory List View for Mods to implement the Super App Tool Principle.
This turns spaghetti mods into a single refinable, searchable, sortable, scopeable list of lists.
Microchips and Memory cards, PCBoards and Cables. Plug and play space with simulated functional gadgets.
These examples of lenses can apply to all lists.
You could view your list in the form of a literal old library building with halls of books and multiple levels all sorted physically. Many lists will act like curated collections that you could join with a library card for fun.
Classical Style Notes
Super Dark Theme,
Color Coded Sections,
No Icons,
Exaggerated Indention,
Unaligned Values,
Embedded Comments.
Every option has a
default setting chosen by the developers,
an empathy setting dynamically chosen by the CoOS,
or a bias setting chosen by the operator, which overwrites the default or empathy settings.
These are values that define how the system acts. From here you can change these.
Connections, Display, Sound, Sensors, Notifications, Permissions, Power, Storage...
Consider the restaurant waiter: you give your order, then they go on to ask you to specify many details. Would you like to supersize that? Medium well? Combo? Annoying and unnecessary, because you can add any details you wish when you make the order, and if you don't specify, the waiter should just apply the default biases.
You can switch between groups of defaults by applying a profile. Profiles could reconfigure all defaults or a small set of defaults or even individual presets.
This is useful for pre-scaling cards: you could scale a card, sort its rank, and save that setup to an existing profile, so any time that profile is triggered it will rescale this window with these settings.
Profile use can be monitored for scoring performance. Arriving near a planet could trigger the delivery of a profile that limits your abilities. If you wear (activate) the profile, it will reconfigure your biases, and your performance and accomplishments will be measured in that context. This prevents cheating, like claiming you signed a peace agreement despite alien objections during a trade negotiation, when all you did was alter their personalities.
Profile use can be monitored for immersion. If you don't wear the profile, you may also be invisible to those who are already on that planet wearing the profile. This could improve immersion for profile wearers, so, for example, a hundred-foot unicorn doesn't interrupt their ancient England cup of tea.
Alternatively to hiding you, the profile could transform your appearance to conform to the theme from the wearers' perspective, while from yours nothing changes.
These can have defaults but these are always replaceable.
The one exception is a ripcord.
This is a safe-mode way to access the main list in case you deleted other ways to get there. From the main list, you can get everything back to a usable state.
#hotkeys #shortcuts
Instead of just dedicated hotkeys and mouse buttons, when you hover over a tool and press a key, the tool is applied to that key, like dipping a brush into paint.
But all functions are available in the master list.
Another palette-style option is to predefine any set of keys that point to a location instead of a specific Button ID. Say you Bias Feed a handful of keys such as Q, W, E, R, T, Y, and define the toolbar as the target: for whatever menu is open, the topmost button is Q. You could stack rows of keys to manipulate multiple levels of a viewable toolbar in a single click, or even drag your finger across the keys to scroll. This frees your mouse cursor to stay in the game instead of having to go to the dugout every time there is a change.
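A sketch of this positional palette, where keys target slots rather than fixed Button IDs (the palette order and menu contents are invented examples):

```python
PALETTE = ["q", "w", "e", "r", "t", "y"]   # top-to-bottom toolbar slots

def press(key, toolbar):
    """Return the button at this key's slot in whatever menu is open."""
    slot = PALETTE.index(key)
    return toolbar[slot] if slot < len(toolbar) else None

# The same key triggers different buttons as the open menu changes.
draw_menu = ["select", "move", "rotate", "scale"]
sculpt_menu = ["grab", "pinch", "smooth"]
```

Pressing Q always means "topmost button", whichever toolbar is showing, which is what keeps the cursor in the game.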
Biasing a tool (such as a measurement constraint) isn't pointing the tool at geometry; it is like mixing paint to be applied later. It can have any number of definitions preconfigured. When a constraint is said to be applied in theory, it means it has no geometry yet is predefined, or biased.
Profile
Knowledge:
Tools
Skills
Languages
Wetware
Vision
Hearing
Touching
Bias
Display
Scale
Contrast
Sound
Sensors
Touch Screen
Augmented Air
Buttons
Card
Scale
Fold
Styles
Rank
Colors
Names
Simulation
Environment
Distance
Noise
Intention Recognition
Model
Test
Simulate
Predict
Retest
Description of the Problem including examples.
Instructions and Principles for a solution.
Meaning Connections
Macros
This is a recording of actions that can be replayed to repeat those actions in space.
It can also have algorithms that adjust the recording to fit the situation procedurally.
Once the solution is identified, a macro can be used to teach, generate examples, or directly perform the solution.
This is like a video tutorial but better as it is actually performing it in-world
The macro can be sped up to just provide the final result or result example.
The user can give an example of a set of actions and then paste that action onto other objects in the same style or pattern. For example, you can connect output 3 of node 1 to input 1 of node 2, then paste that example on node 3, and it will connect output 3 of node 1 to input 1 of node 3. You could further refine the example by pointing out the reason for choosing input 1, such as the number 1; you could instead have chosen the input's name, in which case it would ignore the number.
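The node example above can be sketched as follows, using hypothetical node and port names (the graph representation is an assumption for illustration):

```python
def connect(graph, src, out_port, dst, in_port):
    graph.append(((src, out_port), (dst, in_port)))

def paste_example(graph, example, new_dst):
    """Reapply the example's source and ports to a different node."""
    (src, out_port), (_, in_port) = example
    connect(graph, src, out_port, new_dst, in_port)

graph = []
connect(graph, "node1", 3, "node2", 1)      # the operator's example
paste_example(graph, graph[0], "node3")     # paste the pattern on node 3
```

A fuller version would let the operator mark which part of the example (the port number or the port name) carries the intent.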
This requires the skill of searching
This requires knowing terms
This includes community conversations
This includes knowledgebases sorted and unsorted
A guided step by step definition tool created by the community to optimize workflows.
Categorize all problems from abstract to specific
Guide operator to narrow down knowledgebase to solution’s Help Card Community submitted/maintained?
#AI #VOP #NPC
VOPs are like Wizards but with awareness and autonomy.
A VOP can assist by performing any of the above methods for you.
They share solutions based on interpretation, confidence, and operator's bias(preferences).
For context about goals, struggles.
You can share where you are headed and why you are doing what you are doing and the VOP will listen and possibly use it to narrow down its list of solutions.
It's not enough to just watch sometimes the VOP needs to learn how much you know so it doesn't overwhelm nor patronize you. It could do this through tests and questions.
And for preparing solutions to possible problems in advance, enabling faster responses.
The VOP could offer solutions unprompted by the operator, if the operator seems to get stuck, or if tedium is detected which a known procedure could solve.
The VOP can silently make recommendations in your message feed rather than interrupt you. This is up to your biases.
The operator could prompt the AI for a solution by just saying "what do you think?"
Because it has been gathering context and understanding cues, it may be able to respond with a solution with little clarification.
If it does need clarification, the AI could ask pointed questions to raise its confidence.
Through design we can implement the Forgiving Interface Principle and help prevent fear in operators by correcting mistakes and obsoleting pride. Fear and toxic words convince people that they aren't creative. By disproving this, we invite more people to embrace their creativity and the CoOS.
Autocorrection / Interpretation
Autodocumentation
Error Testing: there are no errors, just unexpected outputs.
Autofilling default inputs
There is little ambiguity, because rather than disconnected strings of symbols, autofill connects what you input to a holistic, networked meaning.
Flatten the learning curve.
Equalize alternative approaches/workflows success rate.
Pros develop solutions but AI humbly offers those solutions.
Replace Pros with algorithms that procedurally clean, repair, enhance, or document your shapes, etc...
This document is full of definitions for each principle and design but here are some terms used throughout that have not been clearly defined.
Dev | Developer, Programmer, Manager, Designer, anyone who cooperates with this vision. |
OP | Over Powered Operator, User, Customer, Player, Modeler... |
VOP | Virtual Operator, AI, Artificial Intelligence, NPC, Assistant... |
Shape | Geometry, Model, Sculpt, Polygon, Curve |
Material | A shape property that changes light that hits it, including color, texture... |
List | |
Filter | |
Mod | When, Do, Logic, and Process are the categories that combine to form instructions. |
Form | A specific When, Do, Logic, or Process chosen from a list. Proximity is an example of a When Form |
Property | A changeable setting for a Form. Target is an example of a Proximity Property. |
Value | The current setting for a Property. 20 meters is an example of a Target Property's Value. |
Meaning | This is the network of tags that allow both humans and computers to understand an idea. |
Bias | Settings, Preferences, Options, Configurations, Profiles |
CoOS | Cooperating System |
Wizard | A guided step by step definition tool created by the community to optimize workflows. |
These are ideas we need to explore and sort into the design but will not necessarily move intact as a new feature, instead it will more often integrate into existing features.
List Features
In documents you hit Enter to return to the left of the next row, but there could be a shortcut key that moves you to many places: like Home and End, but also many custom locations, such as the beginning of the sentence, word, or paragraph, or the next "M". This would allow for more procedural algorithms, like going through a large text to adjust it.
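Two of those custom jumps might look like this sketch (the function names and the "period plus space" sentence rule are simplifying assumptions):

```python
def next_occurrence(text, pos, target):
    """Index of the next `target` character after `pos`, or None."""
    found = text.find(target, pos + 1)
    return found if found != -1 else None

def next_sentence(text, pos):
    """Index just after the next period and space, or None."""
    found = text.find(".", pos)
    return found + 2 if found != -1 and found + 2 <= len(text) else None

doc = "Move fast. Make Mods. More later."
```

Chaining such jumps is what makes procedural adjustment of a large text possible: each call returns a new cursor position to feed into the next.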
With clever techniques, Revision Graphs (like GitSVN), and Causality/Relativity Chunk Propagation, we can create infinite things with finite resources.
You could draw a line and set its length to ∞; instantly it will render to the extent of the perspective of anyone in the same chunk as you. If they run along the line, it will forever draw ahead of them in real time. If you teleported chunks away, however, the line may not be there, because of a causality speed limit that propagates updates across chunks. You can wait, or you can select the chunks between you and the object and make a causality exception, so you could, for instance, shine a laser pointer at a distant star system to point at things.
Another necessary system is the file system. You could literally select all the models ever made and set them to blue. This doesn't change them for everyone else; it makes a copy of these models that are blue from your perspective and that of anyone else who shares your lens. But wouldn't this double the storage space required? Nope, because changes are virtual and procedural. When you do this, it actually changes nothing except the models immediately around you and some instructions in a list. If you dropped in any models later, the list would at that point change them to blue based on your earlier instruction.
Of course this could get crazy, so history lets you undo it, but the infinite standard allows for any scale of project less than infinite.
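The virtual, procedural nature of such a change can be sketched as a lens holding instructions instead of copies (the `Lens` class and property names are invented for illustration):

```python
class Lens:
    def __init__(self):
        self.instructions = []            # e.g. ("color", "blue")

    def recolor_everything(self, color):
        # One instruction, not a copy of every model ever made.
        self.instructions.append(("color", color))

    def view(self, model):
        """Materialize the lens for one nearby model, on demand."""
        seen = dict(model)                # the original stays untouched
        for prop, value in self.instructions:
            seen[prop] = value
        return seen

lens = Lens()
lens.recolor_everything("blue")
ball = {"name": "ball", "color": "red"}
```

Storage stays finite because only models you actually encounter are materialized through the instruction list; everyone outside your lens still sees the red ball.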
We already use procedural generation to fill in the details you don't care to specify, but we will definitely also use AI that can be instructed granularly to, for example, 'create interiors of buildings with a history of use'.
Each spreadsheet cell should have its own spreadsheet.
Lists of lists is effectively multidimensional data.
You can continually nest lists and so have unlimited dimensions.
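A tiny sketch of the claim that nesting depth equals dimensionality (the `depth` helper is an illustrative toy measure, ignoring ragged or empty lists):

```python
def depth(lst):
    """Nesting depth of a list = number of dimensions (toy measure)."""
    if not isinstance(lst, list) or not lst:
        return 0
    return 1 + max(depth(item) for item in lst)

grid = [[1, 2], [3, 4]]                 # a list of lists: 2 dimensions
cube = [[[1], [2]], [[3], [4]]]         # one more nesting: 3 dimensions
```

Each additional level of nesting adds one dimension, so there is no built-in ceiling.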
In the Cooperating system you don’t make mistakes, you iterate refinements.
When you click select, it may select its best guess out of a list of candidates near the pointer, but you can choose from that list on a card to refine your action to more accurately represent your intent.
Instead of nesting methods, we have all methods on one level with tags, like everything else. You can sort and refine by many criteria such as popularity, search, use case, time, etc. That said, it is possible for the creator of a method to include alternative methods inside their method for easy switching. This simply opens the other method and applies any definitions you already had in the first, default method.
Depending on your selection and test cases (performed by the method), the method can change behind the scenes to use another method to make your request work. For example, one method that extrudes (sweeps) a sketch from a flat surface may be fast, but if the surface is curved it will automatically switch to another method that handles curves better, and you won't notice the difference unless a new definition appears when you define the surface.
In the past, CAD software treated 2D geometry differently from 3D geometry; in the CoOS, all geometry is 3D. You can constrain any shapes as if they were one 3D object.
Many CAD systems artificially constrain freedom of motion, but many times it would be better to simply apply collision as a constraint. You can exclude or include tests.
You can divide a surface, and not only in the center: you can define exactly what proportion of the shape you want to divide, such as a pattern, ratio, decimal, or percentage.
You leave tags on everything you touch. These are applied automatically so the system can make informed decisions.
You can manually add to this trail; then everything you do will have that tag applied. You can later filter for constraints, for example, and disable all constraints with that tag.
This is a constraint that is not fully defined and can use other partially constrained shapes to complete itself.
This can be used to quickly assemble parts.
This can be accomplished by adding a constraint to a shape, define one of the targets, as Mate.
Later, you can filter space to show undefined constraints, then easily connect two mates together.
This fills in the missing definition with one from the mated constraint.
It is important to be able to do everything from 3D space, list space, or both.
What are the minimum components of a widget?
Pertinent Handles
Constraint Highlight such as snapping to a line.
Measurement Annotation(axis, arcs, ends, past, present)
This could use the same as any measurement annotation; Distance, Angle, Etc.
Floating Cards for typed definitions,
Visible so it is obvious what listed definition is associated with what aspect.
These can be tabbed through but should be predefined.
Notifications displaying what it is expecting/waiting on.
The utility has to outweigh the obscuring of space. This is almost a popup, so it is critical that it only appears when the user requests it directly. The goal is to stay in 3D space for as long as you want without tying your hands. It's better to drag a card open from a toolbar, but you can choose to click it like a button that triggers the opening in a designed or biased space.
You could activate a position widget without rotation or activate both so they are available to use and stay out of the way when they are not.
You can also group selections so that one visible widget is for one group and another simultaneously visible widget is for another group.
While it is common to use 3 perpendicular axes with a global alignment, these constraints can be removed and custom axes can be added to widgets.
You could tab through definitions in list view, but each value should be predefined by default so multiple steps are not necessary. For example, you are defining the Position, but the Scale is defined by biases minus reasonable margins from your perspective, and the Rail is defined by an invented rail to be later replaced, reshaped, or left as-is.
These can be viewed, selected, or reshaped just as you can any shape.
Study BricksCad Widget and Direct Edit (below). Conclusions (above)
There are many ways to define a curve and each could be displayed with various handles and controls. You can toggle the visibility of each of these controls depending on each situation.
A widget form of displaying what is constrained and how will be useful.
In-mates are half of a constraint, highlighted for easy access.
Tools can completely transform what your pointer does, or tweak it slightly: like a palette, you can dip your brush in red paint, or mix in some glitter and get glittery red.
Sweep
Contain
Lasso
Paint
Intensity
Size
These are attributes you can adjust about your pointer regardless of the tool that is applied to the pointer. How they apply to the pointer can vary by default or bias.
Temporary before going back to defaults
Stick to the tool so the last setting is applied next time you use the same tool,
Globally apply to any tool.
You can activate a tool;
it’s button will depress showing it’s active,
it’s panel will show in your toolfeed,
and it’s handles will appear in 3d space,
but then you can then activate another tool;
it’s button will also depress showing it’s active,
it’s panel will push down the first tool in your toolfeed,
and it’s handle widget will appear in 3d space
Both tools can now be seen and used simultaneously.
To deactivate one of those tools you can select each again.
This could instead be something like a Shift function so activating a tool overrides the last unless shift is held during the click.
Should we calculate predictions all the time or just when the user asks.
Consider the underline and cross-through buttons on YouTube live chat and live captions on Android, and how difficult it is to tell whether a button is showing the current state or what will happen if you push it.
In the past, if you subscribed you got whatever was in that list, usually a channel or an artist, and those are so broad that 80% of the content you received could be stuff you're not interested in or not why you subscribed. Now you can subscribe to very specific feeds and filter them further and further until you are only fed what you want.
Let’s say you have Cards with Mod rules that say;
“If text overflows the margins, clip the text.”
This image shows that “Color Correction” could have been displayed fully.
What if by overflowing the text by a couple pixels it would have completed the intended idea of the text? Should the UI break the rules to prioritize the user?
Can we bake in rule breaking into the priority bias settings?
This also shows why fading is not a good UI for showing there is more in a card. Instead we could use the card shape to indicate the cell has been clipped.
Context Menu Widget
This floats near the cursor so you don't have to go to a side menu.
This obscures shapes/space.
Before and after display.
The original shape's appearance could be an optional display, but defaulted off.
Original widget definition could be an option but may not be more important than space visibility.
Here we can see the option to use a slider to change the visibility.
Movement Widget
The inner white hollow circle positions from your perspective
The arrows position along an axis
The outer white hollow circle rotates from your perspective,
Grabbing the white filled circle treats it as a rotating trackball
The cubes scale along an axis
the arcs rotate along an axis
Stackable Movement Widgets
Orientation Widget
Generate Wizard Widget
Generate Multistep Wizard Widget
You could tab through definitions in list view, but each value should be predefined by default so multiple steps are not necessary. You are defining the position; the scale is defined by biases and reasonable margins from your perspective.
Ruler Display?
I think this is unneeded noise.
Offset Axis Cursor
Sweep Along
Sweep Around
Edge Widget
Offset Widget
Bevel Widget
Use a line as a selection brush so you can select a line of points and refine it later like any other shape.
A common tool that adds a string between your pointer and the dragged object to dampen the effect of your subtle motions on the shapes.
Imagine the surprise when you look up at the stars and sort them in the sky filtering and searching through them.
Space is really big, so if humanity spreads out into infinity how can they protect community?
The Population tool will unite humanity and solve MMO loneliness plagues.
The idea is that you can populate your surroundings with real people;
You can retheme this avatar to better suit your environment.
Any time someone approaches your clone, a clone of them will appear and approach you at your original location. You can interact with them, mute incoming interruptions, or simply get a notification that someone is trying to interact.
You can also get a notification of where your clones are being spawned and switch places leaving your original location automated instead.
Use Cases
You can set various definitions about how scattered or clumped to randomly place clones as you travel.
You can also build a planet and cities and purposely generate an appropriate sized population for each city.
You can adjust the AI settings to add more routines or other features.
This population tool could also be used as a primitive for designing AI as a designer.
Start with a real person and then augment more and more until it's a full AI, or stop at any point and let the real human augment what the AI hasn't been designed to do.
I love the crafting system from Don’t Starve, it's obvious what types of things you can make and what you need to do to make them.
I want to explore using this type of system for tutorials.
You gather tools as physical objects and
then once you have them all you go down them as a list of steps
letting you define the details as you go to create food or house or whatever.
Seeking out the tools practices holding what the tool looks like, and what you want it for, in your mind in a way that helps it stick.
Use Case
Say you want to make a star: you could search "star" and the everything list will be refined by the search filter.
Things that may remain in that refined list are;
Star shapes complete and ready to plop in your space,
Star wizard which would step you through a process, (this could be a crafting quest)
Star parts, that you can mix together to assemble a star
Star experts, people who you can contact about making stars
Booleans have classically been a data type with the states true or false, but this could also be considered 1 or 0, yes or no, on or off. It should be many things, and in the Cooperating System it can be.
Example
Say you add a switch to space. Now connect the output of that switch to a card and you will see the value "OFF". But this is just a biased (default) minimal view; if you scaled up the card, you could see it has a list of outputs, meaning any of these could be considered by a Mod.
Now say you make a When Mod (Condition) that looks for a value to become "True". If you connect that Mod's target to the switch, and then flip the switch, the Mod will be triggered, because the switch not only has an "ON" output but also a "True" output.
You don't need strict data types like in classical coding; instead, Mods have flexibility and forgiveness, or you can make your own strict data types as needed.
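The switch example above can be sketched as a value that exposes many equivalent outputs, so a When Mod looking for any alias still triggers (the alias sets and class names are illustrative assumptions):

```python
ALIASES = {
    True:  {"ON", "True", "1", "Yes", "Go"},
    False: {"OFF", "False", "0", "No", "Stop"},
}

class Switch:
    def __init__(self):
        self.state = False

    def flip(self):
        self.state = not self.state

    def outputs(self):
        # The full list of equivalent outputs, not just one value.
        return ALIASES[self.state]

def when_equals(switch, wanted):
    """A When Mod triggers if any exposed output matches."""
    return wanted in switch.outputs()

switch = Switch()
switch.flip()                    # the switch is now ON
```

An operator who wants classical strictness could shrink the alias sets back down to a single value.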
In spreadsheets, if you sort A to Z it will reorganize every row, but what if you want to sort sections (groups of rows)? With nested lists you can sort by layer.
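Sorting by layer can be sketched like this: each section is a (header, rows) group, so sorting reorders whole sections while each section's rows stay put (the sheet contents are invented examples):

```python
def sort_sections(sections):
    # Sort whole sections by their header; rows inside are untouched.
    return sorted(sections, key=lambda section: section[0])

sheet = [
    ("Vegetables", ["carrot", "beet"]),
    ("Fruit", ["apple", "pear"]),
]
ordered = sort_sections(sheet)
```

The same idea nests: sorting one layer deeper would sort the rows within each section instead.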
Primitives
Tutorials
Interfaces
UV
Importer
Search, filter, spellcheck
Procedural generation
Animation, sticky feet
Messages
AR clones, AI
Custom Mod Blocks, Templates
Shortcuts, Wrangling,
Because a link is connected to the ID of the object at a certain point in time, which is never deleted, even if the object is removed from space it can always be found through its links, unlike websites where links break (404) all the time.
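A minimal sketch of such unbreakable links: a link targets an object ID plus a point in time, and history is never deleted, so lookups always resolve (the store layout and IDs are illustrative assumptions):

```python
history = {}                              # (object_id, time) -> snapshot

def save(object_id, time, snapshot):
    history[(object_id, time)] = snapshot

def follow(link):
    """Resolve a link even if the object was later removed from space."""
    return history[link]                  # history is kept, so no 404s

save("ball-42", 1, {"color": "red"})
save("ball-42", 2, {"removed": True})     # removed from space, not history
link = ("ball-42", 1)
```

Removing the ball only adds a new snapshot; the old link still resolves to the red ball exactly as it was.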
This lets you keep a visual representation as the code itself, rather than forcing it to be translated into linear text. This flattens the learning curve, principally because you can use the graph directly by connecting the content to a Mod. The following NFA graph best represents some data and its relationships. You should be able to manipulate it directly rather than have to go to the list format.
Like with selection, dropping a brush could offer a menu so you can refine what you are connecting the end of a line to, more precisely than initially dropped.
These are cards scaled down below the minimum legible scale, appearing as points (vertices) to show there is something there that can be scaled up if needed, yet stays out of the way otherwise.
Commonly when creating, it is difficult to switch between typing and hotkeys, or between selecting the text object and painting between letters.
If you try to make a worm-screw hose clamp, how do you put a pattern of holes into the band and then wrap them along a curve later?
Designed by Harry Heath, this modifier proves this real time deformation is possible.
CoOS Design Guide (this) Desktop Full Mobile Full