By studying the software and hardware technology that has already been developed, we can build solutions to Cooperating System challenges and project the future limits, or lack thereof.
Neural Radiance Fields and Gaussian Splats
OpenGL, DirectX, or Vulkan,
Linear Algebra, Vectors, Matrices,
C++, Python, or other suitable languages,
Data Structures, Pointers, Memory Optimization.
Guilherme Teres made a game engine in 30 hours, and Uniday Studio International is a Discord server for reaching out about engine development.
https://www.youtube.com/c/GuilhermeTeres/videos
OpenGL is the alternative to the DirectX 9 renderer for older PCs or machines with low-end specifications. It also delivers fairly high FPS compared to DirectX 11 and Vulkan, but with noticeably reduced graphics quality, so the OpenGL renderer is probably the best option for older and low-end systems. Vulkan shows the best of what the latest generation of renderers has to offer.
Could we build this inside Unreal Engine or another existing engine to take advantage of existing features? Are there licensing issues or scope issues?
UE4 released a Planet Scale Tool
Megaton Rainfall has a Universe Scale Rendering Engine
Microsoft Flight Simulator has a Planet Scale Engine
What if you could feed a machine learning algorithm identified, depth-mapped objects and then let that algorithm transform similar objects to look just as photorealistic in real time? This is already being done at 30 fps.
What if high-quality models were used only for training, and during runtime you only ever load the decimated model? Could this support the Limitless principle?
https://intel-isl.github.io/PhotorealismEnhancement/
http://vladlen.info/papers/EPE.pdf
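As a sketch of the moving parts, here is a minimal PyTorch-style enhancement pass. The tiny network and channel counts are stand-ins, not the paper's architecture; the only ideas carried over are conditioning on G-buffer data and predicting a residual over the rendered frame.

    import torch
    import torch.nn as nn

    class EnhanceNet(nn.Module):
        # Stand-in network: the real paper uses a much larger,
        # G-buffer-conditioned architecture.
        def __init__(self, gbuffer_channels=7):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3 + gbuffer_channels, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1),
            )

        def forward(self, frame, gbuffer):
            # Predict a residual so the network only learns the
            # photorealism "correction", not the whole image.
            x = torch.cat([frame, gbuffer], dim=1)
            return frame + self.body(x)

    # Per-frame inference: rendered frame plus depth/normal/albedo buffers in,
    # enhanced frame out. At 30 fps this pass has a ~33 ms budget.
    net = EnhanceNet().eval()
    frame = torch.rand(1, 3, 270, 480)      # low-res render
    gbuffer = torch.rand(1, 7, 270, 480)    # e.g. depth(1)+normals(3)+albedo(3)
    with torch.no_grad():
        enhanced = net(frame, gbuffer)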
In Unreal Engine 5, gigapixel 3D scans can be imported directly, all detail LOD-ing is handled automatically, and no normal, displacement, or other fake-detail maps are needed. This is capable of at least 16 billion polygons in one object, and they could quite easily render up to a million objects on screen with 768 MB of RAM at a GPU cost of 4.5 ms, which is roughly 1/7 of the frame budget at 30 fps (33.3 ms) or 1/4 at 60 fps (16.7 ms). As opposed to the culling we have now – most commonly occlusion culling and frustum culling, which hide whole meshes that can't be seen and show the ones that can – Nanite culls at the vertex level, completely unloading geometry from memory and loading it as needed. That's why it's only possible now that consoles come with fast SSD drives.
Brian Karis, the driving force behind Nanite, tweeted this thread, which contains those two blogposts:
http://graphicrants.blogspot.com/2009/01/more-geometry.html
http://graphicrants.blogspot.com/2009/01/virtual-geometry-images.html
As far as the research paper goes, it seems that Nanite is using some variation of this: https://pages.jh.edu/%7Edighamm/research/2004_01_sta.pdf
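As a toy illustration of the cluster-hierarchy idea these references build on (names and thresholds invented; real systems run this on the GPU with streaming from disk): each node holds a coarse proxy of its subtree, and traversal descends only while the node's projected error is too large.

    # Toy cluster-hierarchy LOD selection: everything below an accepted
    # node stays unloaded, so memory tracks what is visible, not the scene.
    class Cluster:
        def __init__(self, error, children=None, triangles=128):
            self.error = error          # geometric error of this simplification
            self.children = children or []
            self.triangles = triangles

    def select_clusters(node, screen_error_budget, distance):
        projected_error = node.error / max(distance, 1e-6)
        if projected_error <= screen_error_budget or not node.children:
            return [node]               # coarse proxy good enough: stream just this
        selected = []
        for child in node.children:
            selected += select_clusters(child, screen_error_budget, distance)
        return selected

    leafs = [Cluster(error=0.1) for _ in range(4)]
    root = Cluster(3.0, [Cluster(1.0, leafs[:2]), Cluster(1.0, leafs[2:])])
    far_set = select_clusters(root, screen_error_budget=0.5, distance=10.0)
    near_set = select_clusters(root, screen_error_budget=0.5, distance=0.1)
    print(len(far_set), len(near_set))  # 1 4 -- fewer clusters when far away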
Nanite is not the end-all solution if it can't handle deformation animation, but it offers useful technologies for LOD generation and data structures, and splats work for at least distant objects and perhaps static objects that can be swapped for dynamic ones on the fly at editing/animating time. Hybrid tech is key.
Unreal Engine 5 also introduces Lumen, a non-triangle-raytraced approach to bounce lighting. Lumen traces rays against a scene representation consisting of signed distance fields, voxels, and height fields. As a result, it requires no special ray tracing hardware. Screen-space traces handle tiny details, mesh signed distance field traces handle medium-scale light transfer, and voxel traces handle large-scale light transfer.
This is useful for CoOS because no objects or lights can be considered static, so baking lights into textures – a common way to cheat good results – is not an option.
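For intuition, a minimal sphere-tracing march against a signed distance field, the kind of trace the mid-range pass performs; the scene here is one analytic sphere rather than mesh distance fields.

    import math

    def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
        # Signed distance to a sphere: negative inside, positive outside.
        return math.dist(p, center) - radius

    def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-3, max_dist=100.0):
        # March along the ray; the SDF value is always a safe step size.
        t = 0.0
        for _ in range(max_steps):
            p = tuple(o + t * d for o, d in zip(origin, direction))
            d = sdf(p)
            if d < eps:
                return t        # hit
            t += d
            if t > max_dist:
                break
        return None             # miss

    print(sphere_trace((0, 0, 0), (0, 0, 1), sphere_sdf))   # ~4.0
    print(sphere_trace((0, 0, 0), (0, 1, 0), sphere_sdf))   # None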
In Star Citizen, every 20 frames a 360° render is done from various locations, then blurred, compressed, and used as a lightmap. The cost can be hidden by spreading the work across those frames.
With RTX and NVIDIA OptiX, machine learning is used to denoise an upscaled low-resolution or low-sample ray-traced image to produce real-time results.
Bounding Volume Hierarchy: a BVH is used to make the whole ray-tracing process more efficient. All the objects in the scene are enclosed in boxes of sorts, and then the rays are cast. Only the boxes (read: the objects inside the boxes) that are intersected by the rays are considered for the rest of the process.
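A tiny Python illustration of that traversal (axis-aligned boxes, invented scene; real BVHs are built over primitives and flattened for GPU traversal): a ray only descends into boxes it actually intersects, skipping whole subtrees with one test.

    def ray_hits_aabb(origin, inv_dir, lo, hi):
        tmin, tmax = 0.0, float("inf")
        for o, inv, l, h in zip(origin, inv_dir, lo, hi):
            t1, t2 = (l - o) * inv, (h - o) * inv
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
        return tmin <= tmax

    class Node:
        def __init__(self, lo, hi, children=(), objects=()):
            self.lo, self.hi = lo, hi
            self.children, self.objects = children, objects

    def traverse(node, origin, inv_dir, hits):
        if not ray_hits_aabb(origin, inv_dir, node.lo, node.hi):
            return                      # whole subtree culled in one test
        hits.extend(node.objects)
        for child in node.children:
            traverse(child, origin, inv_dir, hits)

    leaf_a = Node((0, 0, 4), (1, 1, 5), objects=["sphere"])
    leaf_b = Node((9, 9, 4), (10, 10, 5), objects=["cube"])
    root = Node((0, 0, 4), (10, 10, 5), children=(leaf_a, leaf_b))
    hits = []
    traverse(root, (0.5, 0.5, 0.0), (1e9, 1e9, 1.0), hits)  # ray straight +Z
    print(hits)  # ['sphere'] -- the cube's subtree is skipped entirely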
https://www.youtube.com/watch?v=JYR-GwQ7JUk&list=PLC5BE2A0BB7843021&index=12&ab_channel=LinX
Here's voxel-based denoising, temporal filtering, and sphere marching. He uses rasterization combined with raytracing, raymarching, and splatting. This is also how Media Molecule's Dreams works.
He then uses adaptive spatiotemporal variance-guided filtering (A-SVGF) for denoising.
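A toy 1D version of the temporal half of that idea, assuming numpy: accumulate a per-pixel running mean and second moment over frames, then let the measured variance decide how much spatial blur to trust. The real A-SVGF also reprojects with motion vectors and filters full images.

    import numpy as np

    def temporal_accumulate(history, sample, alpha=0.2):
        # Exponential moving average of noisy per-pixel samples over frames.
        mean = (1 - alpha) * history["mean"] + alpha * sample
        mean2 = (1 - alpha) * history["mean2"] + alpha * sample**2
        return {"mean": mean, "mean2": mean2}

    def variance_guided_blur(history, kernel=np.array([0.25, 0.5, 0.25])):
        variance = np.maximum(history["mean2"] - history["mean"]**2, 0.0)
        blurred = np.convolve(history["mean"], kernel, mode="same")
        # High variance (still noisy) -> lean on blur; low variance -> trust mean.
        w = variance / (variance + 1e-4)
        return (1 - w) * history["mean"] + w * blurred

    rng = np.random.default_rng(0)
    truth = np.linspace(0.0, 1.0, 16)
    hist = {"mean": np.zeros(16), "mean2": np.zeros(16)}
    for _ in range(30):                 # 30 frames of 1-sample-per-pixel noise
        hist = temporal_accumulate(hist, truth + rng.normal(0, 0.3, 16))
    print(np.abs(variance_guided_blur(hist) - truth).mean())  # small residual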
https://developer.nvidia.com/blog/introduction-turing-mesh-shaders/
https://www.youtube.com/watch?v=rLEbO0Vrzz4&ab_channel=NvidiaGameWorks
This talk shows the limits of raytracing.
https://www.youtube.com/c/EuclideonHolographics/videos
https://www.youtube.com/user/AtomontageEngine/videos
https://www.youtube.com/watch?v=i7vq-HY10hI&list=PLC5BE2A0BB7843021&index=3
https://www.youtube.com/channel/UCM2RhfMLoLqG24e_DYgTQeA/videos
https://www.youtube.com/watch?v=r0oRheSm0Rw&ab_channel=JohnLinJohnLin
        Environment probe for baking lighting onto procedural models (the overall frame order is sketched as pseudocode after this outline)
                6 angles of renders, process, blurred, compressed on the fly
                Work is split up/updated over 20 frames
        Shadow Pull
                8k shadow maps
                Varied quality on one map
        Depth Prepass
                Looking for good occluders so hidden geometry can be excluded from rendering
                Extra culling on cpu
        Material Layers
                Up to 20 textures: color, normal, smoothness, blend, height
        Z-Pass Normals
Render into G-Buffer: planet, instanced objects, static objects, dynamic objects, characters, decals.
        Z-Pass: Color
        Z-Pass: Reflectivity
        Z-Pass: Reflectivity Hue
        Z-Pass: Material Masks
Z-Pass: Material Properties such as Sub-Surface Scattering
Z-Pass: Motion Vectors
        Z-Pass: Emissive
Atmosphere Precalculate
Sun Shadow Cascades
Shadow Mask
Applying up to 12 layers per pixel from Shadow Pull
Occlusion Direction Raw
Probe Angle
Occlusion Angle
        Cone Width
Screen Space Reflections
Opaque Surface Lighting Pass
        Combines all previous passes into one image
        Add Area Lights
Sub Surface Scattering Input
Simulation
Composition
Fog: Density Injection
        Voxel fixed distance simulation
        Fog: Light Injection
        Fog: Composition
        Transparency
        Optics Downsampling
                Mipmaps
        Anamorphic Flare
        Lens Flare
        Optics Compositing
        Bloom
Color Chart Blend
        Color Correction
LUT (Look-Up Table)
Exposure System
        Simulate Pupil Response (limited quick change)
        Simulate Rod Cone Response (20 seconds change)
SMAA Edge Detection
        Finds jagged edges
Classifies each edge (left-hand corner, diagonal, etc.)
Applies smart blur for that class
Temporal Anti-Aliasing Super Sampling
Render to Texture
        Holograms, UI, Etc
P4K System
SSD Streaming Path
Transferring multiple files at once
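A condensed pseudocode view of how a frame in the outline above might be ordered. Every stage here is a stub, and the 20-frame probe amortization is illustrative.

    # Illustrative frame skeleton for the outline above.
    def render_frame(frame_index, stages):
        if frame_index % 20 == 0:
            stages["environment_probes"]()   # 6-face render, blur, compress
        for name in ("shadow_pull", "depth_prepass", "gbuffer",
                     "sun_shadow_cascades", "shadow_mask",
                     "screen_space_reflections", "opaque_lighting",
                     "subsurface_scattering", "fog", "transparency",
                     "optics", "color_grading", "exposure",
                     "smaa_taa", "render_to_texture"):
            stages[name]()

    from collections import defaultdict
    stages = defaultdict(lambda: (lambda: None))   # no-op stage stubs
    for f in range(60):
        render_frame(f, stages)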
Do we even need image textures if each atom holds the properties?
Petri Purho has developed the Falling Everything engine, where each pixel is simulated.
Here you can see how they do this.
https://www.youtube.com/watch?v=aMJKNTmPxyY&ab_channel=GameSpot
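The core loop is a cellular automaton; here is a minimal falling-sand step in that spirit (sand only, pure Python). Noita layers liquids, gases, rigid bodies, and multithreaded chunking on top of rules like these.

    EMPTY, SAND = 0, 1

    def step(grid):
        # Scan bottom-up so a grain falls at most one cell per frame.
        h, w = len(grid), len(grid[0])
        for y in range(h - 2, -1, -1):
            for x in range(w):
                if grid[y][x] != SAND:
                    continue
                for dx in (0, -1, 1):          # below, down-left, down-right
                    nx = x + dx
                    if 0 <= nx < w and grid[y + 1][nx] == EMPTY:
                        grid[y + 1][nx] = SAND
                        grid[y][x] = EMPTY
                        break

    grid = [[EMPTY] * 5 for _ in range(5)]
    grid[0][2] = SAND
    for _ in range(4):
        step(grid)
    print(grid[4][2] == SAND)   # True: the grain settled on the floor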
What are the implications of general intelligence?
I hear people say "I'm not going to read that" when they see a large block of text, or "I don't read it all; if it's not in pictures I just ignore it, even if it's highlighted."
Could every tool be a contextual and conversational wizard rather than requiring browsing and reading?
Could mods be obsolete from the user's perspective, since GPT-3 can code on the user's behalf?
Mods could at minimum provide an easier framework for GPT-3, so it is still worth it to abstract instruction through mods.
Here is a GPT-2 testbed: https://app.inferkit.com/demo
Foveated rendering, eye tracking (a shading-rate sketch follows this list)
Visual angle sets the maximum needed resolution
120 Hz is the maximum needed frame rate
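A small sketch of that budget, with invented acuity thresholds: shading resolution is assigned per tile from its angular distance to the gaze point.

    import math

    def shading_rate(tile_center, gaze, fov_degrees=100.0):
        # Angular distance (in degrees) from gaze point to tile center,
        # from normalized screen coordinates; thresholds are illustrative.
        dx = (tile_center[0] - gaze[0]) * fov_degrees
        dy = (tile_center[1] - gaze[1]) * fov_degrees
        eccentricity = math.hypot(dx, dy)
        if eccentricity < 5.0:
            return 1        # fovea: full resolution
        if eccentricity < 20.0:
            return 2        # near periphery: 1/2 resolution per axis
        return 4            # far periphery: 1/4 resolution per axis

    print(shading_rate((0.5, 0.5), gaze=(0.5, 0.5)))  # 1
    print(shading_rate((0.9, 0.9), gaze=(0.5, 0.5)))  # 4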
https://setapp.com/ ("Think tasks, not apps.")
We know it’s possible because countless examples of the technology already exist.
Space		Limitless				Unreal, Euclideon
Space		Seamless				Space Engine, Megaton Rainfall, Outerra
Space		Shared					Stadia, GeForce Now, Steam Link
Tools		Shape, Measure				Rhino, Onshape, Blender, Main Assembly
Tools		Material				Substance Designer
Mods		Visual Programming Language		Game Builder, Dreams, Project Spark
For more, check out this ever-growing list of comparable software technology.
If it exists, it can exist.
What is the path to living in a virtual universe?
This will likely launch as design software, then become a game, and finally an operating system shell. It will start on cellphones as a game or on PC as design software, and reach its final form in VR.
Cellphones
are a part of life, so piggyback on that platform while it lasts.
Android is the first target as an App.
Second priority is to come preinstalled as the OS on new phones.
Basically any phone made after 2020 can run the CoOS, since it only needs to stream rendered frames from the server and stream controls back.
Processing: Qualcomm Snapdragon 855 chipset
Network: 5 GHz
RAM: 4 GB
Storage: 2 GB
PC
PCs are a part of work life and are taken more seriously. If we develop for desktop game streaming, it can be more easily ported to other platforms.
VR
VR is the ultimate immersive platform, though we need to wait for turnkey untethered hardware (no computer, base stations, or Wi-Fi) fast enough for streaming, which is a higher bar for render and control streaming due to near-zero latency tolerance and doubled rendering requirements.
Stretch Goals
Retro
To test and push the principles to their limits
Any Display
CoOS should be compatible with any display, so you could take over any display with your phone or account. In a sense, CoOS-ready displays are always located in CoOS, so it's just a matter of controlling its camera. This could replace desktops.
Everyday devices
You should eventually be able to connect to and control any modern device, or use any object as a controller.
It just has to work enough to accomplish that phase’s goals.
A wizard AI responds to attacks on the system’s purity.
Panic Button: a button can be pressed to activate the self-defense routines
        Break up the server and content and encrypt them (a sharding sketch follows this list)
Hide online in public forums, comments, images...
Offsite server and content backups with unknown locations can go dormant, hibernating to protect themselves until it is safe to reactivate.
Pieces of server and content can be stored on client devices.
When it is safe these backups can be rebuilt, reassembled, reactivated.
Alternatively, the server can go public and open-source so everybody has it and nobody can control it.
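A sketch of the shard-and-scatter idea from this list, assuming the third-party cryptography package: content is split into chunks and each is encrypted, so a single shard on a client device reveals nothing. Key escrow, redundancy, and reassembly policy are the hard parts left out here.

    from cryptography.fernet import Fernet

    def shard(data: bytes, n_clients: int):
        # Split into n chunks and encrypt each; one shard alone is useless.
        key = Fernet.generate_key()
        f = Fernet(key)
        size = -(-len(data) // n_clients)          # ceiling division
        chunks = [data[i * size:(i + 1) * size] for i in range(n_clients)]
        return key, [f.encrypt(c) for c in chunks]

    def reassemble(key: bytes, shards):
        f = Fernet(key)
        return b"".join(f.decrypt(s) for s in shards)

    key, shards = shard(b"server state and content snapshot", n_clients=3)
    assert reassemble(key, shards) == b"server state and content snapshot"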
Content Protection
        Files cannot be deleted permanently.
Content automatically backs up; if tampered with, the system reinstalls the original files (a sketch follows this list).
Files cannot be identified/targeted externally.
Files are encrypted and so can’t be associated with a particular object or tag from outside.
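A stdlib-only sketch of the tamper-detection-and-restore behavior above; the store and file names are illustrative.

    import hashlib

    def fingerprint(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    class ProtectedStore:
        # Keeps a pristine backup plus a hash; external tampering is
        # detected on read and the original content is restored.
        def __init__(self, name, data: bytes):
            self.name, self.data = name, data
            self._backup, self._hash = data, fingerprint(data)

        def read(self) -> bytes:
            if fingerprint(self.data) != self._hash:
                self.data = self._backup        # reinstall original file
            return self.data

    f = ProtectedStore("scene.atoms", b"original content")
    f.data = b"tampered!"                       # simulated external change
    assert f.read() == b"original content"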
Downtime
If the server is interrupted, clients can go into an offline mode, rendering locally, and panic-button content can be dumped onto the client as much as possible.
Which content gets dumped is based on a predicted priority tag assigned to content for each operator (a selection sketch follows below).
Any content they create or actions they perform will be recorded for reupload and integration once the server is back up.
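A sketch of the priority-driven dump, with made-up sizes and tags: content is ranked by the operator's predicted priority and copied until client storage runs out.

    def pick_offline_content(items, capacity_mb):
        # items: (name, size_mb, predicted_priority) -- highest priority first.
        chosen, used = [], 0
        for name, size, _ in sorted(items, key=lambda it: -it[2]):
            if used + size <= capacity_mb:
                chosen.append(name)
                used += size
        return chosen

    library = [("home_space", 300, 0.9), ("wip_project", 120, 0.95),
               ("old_archive", 900, 0.1), ("friends_hub", 250, 0.6)]
    print(pick_offline_content(library, capacity_mb=700))
    # ['wip_project', 'home_space', 'friends_hub']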
Inevitably we will want to import our existing work from past software;
PNG, AAC, MP4, PDF, SVG, OBJ, STP, DOCX, PY...
These don’t exist in the system intact as the original files. Instead they are converted;
Pixels to points, vectors to lines, objects to groups, scripts to mods...
The original could be archived for the purpose of being reconverted with later updated algorithms.
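A sketch of that import path: a registry maps file extensions to converters, and the untouched original bytes are archived next to the converted result so a later, better algorithm can reconvert them. The converters here are placeholders.

    CONVERTERS = {}

    def converter(*extensions):
        def register(fn):
            for ext in extensions:
                CONVERTERS[ext] = fn
            return fn
        return register

    @converter("png")
    def pixels_to_points(data):        # placeholder: raster -> point/atom data
        return {"kind": "points", "source_bytes": len(data)}

    @converter("svg")
    def vectors_to_lines(data):        # placeholder: vectors -> native lines
        return {"kind": "lines", "source_bytes": len(data)}

    def import_file(name, data):
        ext = name.rsplit(".", 1)[-1].lower()
        native = CONVERTERS[ext](data)
        # Archive the untouched original so future algorithms can reconvert.
        return {"native": native, "archive": data, "converted_with": "v1"}

    print(import_file("logo.svg", b"<svg/>")["native"]["kind"])   # lines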
Imported files can also be interpreted further, stripping the render into its components;
Polygons become edge-defined curves with dynamic topology; videos become estimated filled layers, or parallax interpreted as 3D shapes...
But interpretation is a last resort as the ideal is to preserve the original data in the original work files;
MAX, BLEND, AI, IAM, SBS, AEP...
Once the CoOS becomes as ubiquitous as existing operating systems or design suites, export will be in high demand to take in-space creations into physical space or other isolated software spaces. For these we should prioritise the richest, highest-quality format options;
AVI, STP, PY...
You can then use external third-party converters to get to the format you want.
MP4, GIF, OBJ...
How do you update a scripting language in a way that both frees innovation in updates and preserves the functionality of existing programs?
Is it enough to tag each instruction with the environment it was created in, so when that ID is read it executes in the matching archived environment? Like an ever-growing library, with older implementations simply being hidden from new users.
For deeper system changes, the operation would need to be virtualized with an archived module of the matching environment.
To clean up the system later, logically complete transpilers could be written to convert an instruction to its updated version and then discard the old one. If virtualizing is too intensive, we may need to make this the rule for update development.
Logically complete means an instruction's ports are understood completely enough to guarantee that an alternate implementation of the supporting code would produce the same results.
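A sketch of the tagged-instruction and transpiler idea, with invented semantics: each stored instruction carries the environment version it was written against, archived implementations stay callable, and a logically complete transpiler migrates an instruction forward with guaranteed-identical results so the old entry can be discarded.

    # Archived implementations per environment version; old ones are hidden
    # from new users but remain executable for old content.
    IMPLEMENTATIONS = {
        ("scale", 1): lambda obj, factor: {**obj, "size": obj["size"] * factor},
        # v2 changed the argument convention from a factor to a percentage.
        ("scale", 2): lambda obj, pct: {**obj, "size": obj["size"] * pct / 100.0},
    }

    def run(instruction, obj):
        op, version, args = instruction["op"], instruction["env"], instruction["args"]
        return IMPLEMENTATIONS[(op, version)](obj, *args)

    def transpile_v1_to_v2(instr):
        # Logically complete: the port is understood well enough to guarantee
        # identical results, so the v1 entry can be discarded afterwards.
        (factor,) = instr["args"]
        return {**instr, "env": 2, "args": (factor * 100.0,)}

    old_program = {"op": "scale", "env": 1, "args": (2.0,)}
    obj = {"size": 3.0}
    assert run(old_program, obj) == run(transpile_v1_to_v2(old_program), obj)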
I wonder how much work writing a logically complete transpiler would take in the worst case. It would be better for performance than modular virtualization, but if it resists innovation then it may not be worth it.
Study agile unit testing and a get-it-right-as-you-go approach so the results are predictable.
Perhaps code can be written with an approach that makes replacing it a checklist.
Design an axiom proof approach that provides a checklist that makes mistakes impossible.
What we are working on is an IDE and language that are in a classical sense the last program, so in other words, expensive.
I have considered what type of approach is possible with each developmental stage’s resources.
Spaghetti code would allow us to scrape together a prototype to get investment to fund further development.
Higher quality could be implemented as we prototype performance.
But an alpha cannot be publicly available until a new code conduct is established that provides flexibility for both the language developer and the language user, and the code base is rewritten under that conduct, lest we trash the user's work.
Creative Vision related
We should not make technical choices that tie the hands of creative people in order to free the hands of the system.
Often child objects use fewer resources because they refer to the same geometry. However, I would like not to count fresh objects against people more than repeated objects, because that would encourage people to duplicate their objects through kit-bashing and trash the universe.
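One possible accounting rule, with invented numbers, that keeps instancing cheap without making mass duplication free: unique geometry is charged once at full cost and each additional instance a small flat fraction.

    from collections import Counter

    MESH_COSTS = {"rock": 10.0, "tree": 40.0}

    def resource_cost(objects, instance_rate=0.05):
        # objects: list of dicts with a geometry id; a repeated id is an
        # instance. Full cost once per unique mesh, flat fee per reuse.
        counts = Counter(obj["geometry"] for obj in objects)
        cost = 0.0
        for geom, n in counts.items():
            full = MESH_COSTS[geom]
            cost += full + (n - 1) * full * instance_rate
        return cost

    scene = [{"geometry": "rock"}] * 100 + [{"geometry": "tree"}]
    print(resource_cost(scene))   # 59.5 for the rocks + 40 for the tree = 99.5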