Or it could just be another general bug, but I'm leaning more towards floating-point robustness edge cases in the triangulation implementation.
At this point I don't really want to bother trying to fix it, as it would be annoyingly hard for me to figure out. The bigger annoyance is the char limit: if an (~800 char) interpreter using string data existed, then maybe I'd rewrite it to fit in 1 or 2 scripts and try a more robust algorithm implementation.
Still, thanks for pointing out that it crashes. Might fiddle with it again in the future.
Okay, thanks. It makes more sense that there is a general bug rather than a multiplayer-only bug (although maybe that too, but it could just be that more people are testing in multiplayer).
I do some hacky stuff to fit the code in the 4096-char limit: it is split into 4 scripts to work, and there is some slight bit manipulation involving floats to get more bandwidth between scripts, which I'm not entirely sure is 100% correct, so maybe some edge cases there.
I know the triangulation implementation is not 100% robust due to floating-point arithmetic/comparison error, so there are some very unlikely edge cases there too.
(Also, the chosen algorithm needs a "super-triangle" that encapsulates all sampled points. It's just chosen to roughly cover the playable map, so it would fail when sampling data far outside the map.)
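For context, a Bowyer–Watson-style Delaunay triangulation starts from a single "super-triangle" that must contain every point that will ever be inserted. A minimal sketch of the idea in Lua (the margin factor and map half-extent here are illustrative assumptions, not the project's actual constants):

```lua
-- Hedged sketch: build a "super-triangle" enclosing a square region
-- centered at the origin with half-extent `h` (e.g. the playable map).
-- Any point outside this triangle breaks the incremental insertion,
-- which is the failure mode when sampling far outside the map.
local function superTriangle(h)
  local m = h * 4  -- generous margin (assumed) so border circumcircles stay inside
  return {x = 0,  y = -m},   -- bottom vertex
         {x = -m, y =  m},   -- top-left vertex
         {x =  m, y =  m}    -- top-right vertex
end
```

If a sampled point falls outside this triangle, the insertion step has no containing triangle to split, so the triangulation fails, matching the behavior described above.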
I won't try to fix this in the near future, as my focus lies elsewhere and I'm not really interested in SW right now; I'm also about to start Computer Science at uni.
It's mainly the char limit that I want to fix. I've looked slightly into it and figured out an approach that takes ~800 chars for an interpreter reading custom bytecode from a string, but the compiler from Lua to the custom interpreter/bytecode is the hard part, so it stalled. Might try again in the future.
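The interpreter idea can be illustrated with a tiny stack machine that executes single-character opcodes read from a string. This is purely a sketch of the concept with made-up opcodes, not the actual ~800-char interpreter or its bytecode format:

```lua
-- Illustrative toy VM (opcodes are assumptions, not the project's format):
-- 'c' pushes a constant (the next byte, 0..255),
-- '+' and '*' pop two values and push the result,
-- 'r' returns the top of the stack.
local function run(bytecode)
  local stack, sp, pc = {}, 0, 1
  while pc <= #bytecode do
    local op = bytecode:sub(pc, pc)
    if op == "c" then
      sp = sp + 1
      stack[sp] = bytecode:byte(pc + 1)
      pc = pc + 2
    elseif op == "+" or op == "*" then
      local b = stack[sp]; sp = sp - 1
      local a = stack[sp]
      stack[sp] = (op == "+") and a + b or a * b
      pc = pc + 1
    elseif op == "r" then
      return stack[sp]
    end
  end
end

-- (2 + 3) * 4, encoded as: c2 c3 + c4 * r
run("c\2c\3+c\4*r")  -- 20
```

Shipping bytecode as a string sidesteps the char limit on readable Lua, but as noted, the hard part is the compiler that turns Lua source into such bytecode in the first place.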
It runs fine for me in singleplayer. I don't know about multiplayer, never tried it there.
But it won't work in caves.
So I have no formal way of learning it; it's mostly just being curious and doing some research. ChatGPT can usually be good at providing keywords to search for when you have a vague idea of what you want but no knowledge of the scientific field.
I'm in my last year of high school, so I have been formally taught high-school-level math, but programming is self-taught.
This project has been ongoing for about 1-2 years.
I first got interested in virtual cameras about 2-3 years ago, though in the beginning I lacked a lot of the mathematical understanding, so progress was really slow at the start: I tried to mash together the math I found, hoping it would magically work, without knowing what did what (which, btw, is a bad idea for progress :P).
But I've only been doing work when I've had a great amount of motivation, so it has just been slow, steadily growing progress.
I haven't fixed the multiplayer failure, nor do I know its cause. For now I won't do anything, but I'd of course still appreciate error messages in the future if it persists (I've updated again, so the error should be on a different line), or if other people encounter a similar issue.
Made an example for multiple screens.
Have fixed the bug too.
Could also do multiple HUDs in a single MC, but you would need to duplicate and edit some of the property text, as well as some string data in the scripts.
The easiest way would be to use 3 MCs. Take the laser system out of the MC template into an MC by itself, which controls the laser pivot XY and calculates the XYZ points. Then route the composite signal out of the Lua script and out of the MC into the 2 other MC systems (the template without the laser system), and input the laser XYZ composite into the 3*6 Read Composite boxes.
This would duplicate the triangulation script for each renderer MC. It could also live in the laser system's script instead, but that would require a little (though not much) more setup. The runtime of the Delaunay triangulation is barely noticeable, so duplicating it shouldn't be that bad, though it's not ideal unless you want to independently wipe data from each HUD system. Anyway, I could make a template vehicle that does this in the near future.
https://github.com/Jumper-44/Stormworks_AR-3D-Render?tab=readme-ov-file#in-game-microcontroller-property-paremeters (also just noticed I spelt parameters wrong.)
Additionally:
MDT = Max Drawn Triangles; caps the number of rendered triangles, and if the buffer queue from frustum culling the quadtree exceeds MDT, only every other triangle is added to the buffer.
TBR = Triangle Buffer Refresh Rate; the number of ticks before the buffer refreshes, also doing a triangle depth sort each time.
Max_T = Max Triangle Size Squared; the threshold for the maximum accepted size of a triangle, measured as the squared radius (r²) of the triangle's 2D minimum enclosing circle, which is used for the size comparison.
Min_D = Point Minimum Density Squared; the minimum threshold for the (3D) squared distance to the nearest neighbor in the point cloud, used when trying to add a new point.
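For illustration, the Min_D check boils down to comparing squared distances (no square root needed, which is why the parameter is squared). A hedged sketch in Lua, using a linear scan for clarity, whereas the actual system presumably queries a spatial structure rather than every point:

```lua
-- Hedged sketch of the Min_D density check: a new 3D point is accepted
-- only if its squared distance to every existing point is >= Min_D.
-- Linear scan for illustration; function and field names are assumptions.
local function tryAddPoint(points, p, Min_D)
  for i = 1, #points do
    local q = points[i]
    local dx, dy, dz = p.x - q.x, p.y - q.y, p.z - q.z
    if dx*dx + dy*dy + dz*dz < Min_D then
      return false  -- too close to an existing point: reject
    end
  end
  points[#points + 1] = p
  return true
end
```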
If it's what I'm thinking, it's because when I transfer triangle data as integers via composite, I use uint16 to double the send rate; but it's not a full uint16, because composite wants a float type, so values only go up to 65279 = 2^16 - 257.
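One way such a cap can arise (this is an assumption about the scheme, sketched for illustration, not the project's actual encoding): if two 16-bit values are packed straight into the bit pattern of a 32-bit float, patterns whose exponent field is all ones decode to NaN/Inf and may not survive a float channel intact. With a layout that maps one value's high byte onto the exponent field, capping that value at 2^16 - 257 = 65279 keeps the exponent below 0xFF:

```lua
-- Hedged sketch using Lua 5.3 string.pack/unpack (the Stormworks sandbox
-- may expose a different API). Layout (an assumption): v's high byte ->
-- exponent, v's low byte -> mantissa bits 22..15; w's bit 15 -> sign,
-- w's low 15 bits -> mantissa bits 14..0. Requires v <= 65279 so the
-- exponent is at most 0xFE, i.e. the float is always finite.
local function pack2(v, w)
  local bits = ((w >> 15) << 31)
             | ((v >> 8)  << 23)
             | ((v & 0xFF) << 15)
             | (w & 0x7FFF)
  return (string.unpack("<f", string.pack("<I4", bits)))
end

local function unpack2(f)
  local bits = string.unpack("<I4", string.pack("<f", f))
  local v = (((bits >> 23) & 0xFF) << 8) | ((bits >> 15) & 0xFF)
  local w = (((bits >> 31) & 1) << 15) | (bits & 0x7FFF)
  return v, w
end
```

Since finite floats round-trip bit-exactly, this doubles the integers per channel at the cost of the reduced range described above.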
I made an assumption in some of the logic about how to stop accepting new points when the triangle count gets that high, to save chars, but it seems that assumption may be wrong. I really should fully test my code for those edge cases; this one part, as I clearly remember, I just forgot to test. I'll test it now and see if I can at least replicate it by scanning for a long time.
Tick_failure
171: attempt to perform arithmetic on a nil value (field '?' )
The number composite read is 9, but it's supposed to be 19. Currently the sequence for that point goes 18, 9, 20, but it should be 18, 19, 20, so you can just update that for now. That fixed the issue on my side for 6 lasers, but now I'm about to look into the aforementioned "buffer overload" issue.
I made inputs for 6 lasers but have only tested 3 (on this vehicle), so I should probably have tested all 6... (I haven't as of writing), so it might be a bug. I'm assuming the artifacts are something wrong with the final rendered mesh (after scanning has stopped and the buffer has sent all its data), like wrongly overlapping triangles, i.e. a failed triangulation?
Or it's another issue, which I probably should've fixed with a quick condition check (I will now that you've mentioned it):
There are inputs for 6 lasers, which output 6 sets of points (XYZ), fed into a number composite and then to the script.
The triangulation system only accepts the first 2 valid points each tick and doesn't read the other potential points once 2 are found that tick. Every inputted point is tested for being too close to existing points, to control the maximum point cloud density, so more lasers just make it more likely that there are constantly 2 accepted points per tick.
Due to the char limit, the whole synthetic vision system is split into 3 Lua scripts (not counting the laser system), so the triangulation system needs to communicate changes to the triangle mesh to the render system, and only 16 number channels and the bool channels are available for triangle data. Without going into how that is sent, to keep it short (I'll explain on GitHub at some point): the triangulation system buffers the triangle data it needs to send via composite in a queue. If 2 points are accepted every tick, it "overloads": the buffer queue gets more data than it can output, which ends up as a big delay on the newest triangle data.
What I should've done, and now will do, is add a condition check: if the buffer queue is too big, don't accept a new point.
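The fix described above is a simple backpressure check. A hedged sketch, where the names and the queue limit are assumptions for illustration:

```lua
-- Before running the triangulation on a newly sampled point, check the
-- outgoing triangle buffer queue; if it is already backed up, skip the
-- point this tick. MAX_QUEUE is an assumed limit, tuned in practice to
-- how many entries the composite link can drain per tick.
local MAX_QUEUE = 64

local function shouldAcceptPoint(queue)
  return #queue <= MAX_QUEUE
end
```

This trades some newly sampled points for bounded latency on triangle updates, which matches the behavior described above.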
I'm first going to update the readme on GitHub (currently on the dev branch) and then merge with the main branch. Then I'll update the documentation on the workshop.
If I'm too slow (not to say that you are rushing me in any way), then we can write on Discord: jumper._
I just want to be a little more helpful now that I've said I'd get it done in about a month and then 3 more months happened.
But as for your question to the old vehicle:
The input sensor data are in sync, i.e. laser distance and tilt/compass sensors. So the output to the pivots does not affect the point calculation; it's just there to wiggle the sensor around. And it is inaccurate by ~0.5 meters or so.
Points are stored as global GPS coordinates.
At the start of December I was trying to implement a bounding volume hierarchy for the renderer script, but quickly ran into the char limit. Then I briefly looked at a custom Lua interpreter, which I didn't get far into before school exams. And when you wrote this, I decided to just rewrite the existing implementation.
But I'm going to make some actual documentation this time.
First, for the OG sensor system: are the data points collected by the laser stored as global GPS coordinates or something else? If so, which lines of code handle the input of those coordinates? Does the output from the Yaw and Pitch to the pivoting sensor also affect calculations, or is it just there to wiggle the sensor around?
Currently I've tried a basic adapter from the robotic pivots into composite, and just keeping the tilt and compass on a separate moving part in sync with the gimbal, but there are still several degrees of error between the two, so it cannot be used accurately.
https://steamhost.cn/steamcommunity_com/sharedfiles/filedetails/?id=2793934450 more or less shows how the augmented reality / 3D renderer is done, such that in the end it is "easy" to project a 3D world coordinate onto the screen with AR. But I'm not going to implement a system with any radar myself.
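The final projection step can be sketched as follows. This is a generic homogeneous-coordinate projection in Lua; the row-major 4x4 layout and the screen mapping are assumptions for illustration, not necessarily the conventions the linked resource or the project uses:

```lua
-- Project a 3D world point through a 4x4 camera transform `M` (row-major,
-- flat array of 16 numbers, assumed layout) onto a w-by-h pixel screen.
local function project(M, p, w, h)
  local cx = M[1]*p.x  + M[2]*p.y  + M[3]*p.z  + M[4]
  local cy = M[5]*p.x  + M[6]*p.y  + M[7]*p.z  + M[8]
  local cz = M[9]*p.x  + M[10]*p.y + M[11]*p.z + M[12]
  local cw = M[13]*p.x + M[14]*p.y + M[15]*p.z + M[16]
  if cw <= 0 then return nil end        -- behind the camera
  local ndcX, ndcY = cx / cw, cy / cw   -- perspective divide
  return (ndcX * 0.5 + 0.5) * w,        -- screen X in pixels
         (0.5 - ndcY * 0.5) * h,        -- screen Y (flipped)
         cz / cw                        -- depth, useful for sorting
end
```

With the camera transform precomputed once per frame, each point costs only a matrix-vector multiply and a divide, which is what makes per-vertex AR projection cheap.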
Right now I'm rewriting the system to reduce table usage and lessen the load on the garbage collector and dynamic allocation. I've rewritten the calculation of the cameraTransform (4x4 matrix) and the 2.5D Delaunay triangulation, and I'm now working on the encoding/decoding for sending the delta triangle data. I think I found a way that would make 2 lasers viable at once; I just need to implement and test it.
The rendering system also needs a major overhaul, first to lessen the GC load, but I'll also look into an insertion-built bounding volume hierarchy (BVH), which should be better than a quadtree. The nature of a BVH might also enable looking into level of detail (LOD) for triangles.