Judging by how you're asking, I assume Lua isn't your strong suit, that you haven't looked through the GitHub repo, or that you're not familiar with how a virtual camera works, so it's fair to be uncertain.
So it probably wouldn't be straightforward for you, but at least look through the code and README on the GitHub repo and judge for yourself. You would just need to understand the input/output of cameraTransform.lua and understand Render.lua to project 3D points to the screen.
There are in-game example and template folders to differentiate the core functions.
If you convert the radar target to a global/world 3D point (x, y, z), then the code is mostly straightforward about how to map a point to the screen.
I used a personal Lua library, which makes the functions less plug-and-play; that's a little bad on my part, as you would need to look through that library for whatever is needed, or know the Stormworks VSCode extension for using custom libraries.
This version also uses matrices to optimize for thousands of points, which takes more space than if you had just done some vector calculations, as I've seen others do some time ago. I don't have that implementation at hand, but it would have been a lot easier to plug and play for projecting 3D points to the HUD.
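For reference, a minimal sketch of the simpler vector-style approach mentioned above: projecting one world point through a combined 4x4 view-projection matrix. The matrix layout and names here are illustrative assumptions, not the repo's actual code.

```lua
-- Minimal sketch, not the repo's implementation. 'vp' is assumed to be a
-- row-major 4x4 combined view-projection matrix for the current camera.
function projectPoint(vp, x, y, z, screenW, screenH)
    -- homogeneous multiply: clip = vp * {x, y, z, 1}
    local cx = vp[1][1]*x + vp[1][2]*y + vp[1][3]*z + vp[1][4]
    local cy = vp[2][1]*x + vp[2][2]*y + vp[2][3]*z + vp[2][4]
    local cz = vp[3][1]*x + vp[3][2]*y + vp[3][3]*z + vp[3][4]
    local cw = vp[4][1]*x + vp[4][2]*y + vp[4][3]*z + vp[4][4]
    if cw <= 0 then return nil end -- point is behind the camera
    -- perspective divide, then map NDC [-1, 1] into pixel coordinates
    local sx = (cx / cw * 0.5 + 0.5) * screenW
    local sy = (0.5 - cy / cw * 0.5) * screenH -- screen y grows downward
    return sx, sy, cz / cw -- screen x, screen y, depth
end
```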
There is a section for head_position_offset (an approximation), i.e. a variable/vector for the position of the player's head relative to the seat headrest block (when holding nothing in hand).
That local position vector (head_position_offset) only depends on the seat's look direction (and the player's gender, which defaults to male).
Picture of local coordinate space origin at seat headrest block:
https://github.com/Jumper-44/Stormworks_AR-3D-Render/blob/master/Pictures/Local%20Coordinate%20Space.png
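To illustrate how such an offset depends on the look direction, here's a placeholder sketch; the offset values and rotation order are made up for illustration, not taken from the repo.

```lua
-- Placeholder sketch: rotating a local head offset (relative to the
-- headrest block) by the seat's look direction. Values are made up.
local head_position_offset = {0, 0.35, 0.15} -- local x, y, z in meters

function rotateOffset(offset, yaw, pitch)
    local x, y, z = offset[1], offset[2], offset[3]
    -- pitch about the x-axis
    local y1 = y * math.cos(pitch) - z * math.sin(pitch)
    local z1 = y * math.sin(pitch) + z * math.cos(pitch)
    -- yaw about the y-axis
    local x2 =  x * math.cos(yaw) + z1 * math.sin(yaw)
    local z2 = -x * math.sin(yaw) + z1 * math.cos(yaw)
    -- add the headrest's world position to this to get a world-space head point
    return x2, y1, z2
end
```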
I think it would help a lot of technical players including myself.
I've updated cameraTransform.lua and README.md in the GitHub repo.
aspectratio = w/h
It was used to quickly get the correct aspect ratio of the screen to run the "Quick debug" that is commented out. The sizeX/sizeY values don't match the in-game 9x5 screen exactly, but they have the right ratio so the result isn't warped.
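Concretely, assuming the usual 32 px per monitor block, a 9x5 screen is 288x160 px, which is where a 1.8 ratio comes from:

```lua
local w, h = 288, 160             -- 9x5 monitor at 32 px per block (assumed)
local aspectratio = w / h         -- = 1.8
local sizeY = 1.0                 -- any vertical size
local sizeX = sizeY * aspectratio -- widen horizontally so the image isn't warped
```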
That's my bad; I haven't yet explained what each of the parameters is. I'll edit cameraTransform.lua and update the README.md in the GitHub repo.
head_position_offset origin, or relative point, is in the center of the seat headrest block.
The headrest block is also the block that "GPS_to_camera" offsets to.
I.e. the offset from the physics block to headrest block.
"In CameraTransform.lua, you multiply "sizeX" by 1.8. Why is this?"
That would form a sort of arced trajectory, but not a static bullet arc that falls toward the ground. It can be used to account for over/undershooting.
I think so, just thought of it.
You'd need to use a radar/sonar if you want the point to follow the target, and maybe predict the target's future position by 2-3 ticks for accuracy.
You could of course track a target with a laser, but it'd be more complicated to stay locked, or even to find the target in the first place, without radar.
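A simple way to do that prediction is dead reckoning from the last two radar readings; the names here are made up for illustration.

```lua
-- Illustrative dead reckoning: estimate per-tick velocity from the last
-- two world positions of the contact and extrapolate a few ticks ahead.
local prev = nil

function predictTarget(x, y, z, ticksAhead)
    local px, py, pz = x, y, z
    if prev then
        local vx, vy, vz = x - prev[1], y - prev[2], z - prev[3]
        px, py, pz = x + vx * ticksAhead, y + vy * ticksAhead, z + vz * ticksAhead
    end
    prev = {x, y, z}
    return px, py, pz
end
```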
I used Cheat Engine to read the GPS coordinates of the head in real time, so I have the data for the points, which are only slightly off. I just haven't updated the code since I started on my jet (F-35B used for reference), and I'm a little burned out, but I'll get to it in the near future and update this.
You can improve it and upload it however you like if it becomes of interest. If it's an extension/improved version, i.e. the main part of a system, then a mention would be appreciated; if it's just one part of multiple systems in a vehicle, then it's fine without a mention.
https://www.youtube.com/watch?v=EOxarwd3eTs
The camera in the video lets you see things like you would in AR, but it's a little more cursed/ungodly/Lovecraftian-ish (watch content on the SCP Foundation for more context).
It can be used for LIDAR, where the points can be colored by distance or height: if a point's altitude is below 0, you can clamp it to 0 and color it blue for water, and use different colors for points higher or lower than your vehicle's altitude. That way it's easy to see what is land, mountain, or sea.
With distance, you can also go beyond color and draw bigger circles for closer points.
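A rough sketch of that coloring scheme: screen.setColor and screen.drawCircleF are the in-game draw calls, while the thresholds and colors here are arbitrary choices for illustration.

```lua
-- Rough sketch: color a projected LIDAR point by altitude relative to the
-- vehicle and scale it by distance. Thresholds/colors are arbitrary.
function drawLidarPoint(sx, sy, dist, alt, vehicleAlt)
    if alt < 0 then
        screen.setColor(0, 80, 255)   -- below sea level: treat as water, blue
    elseif alt > vehicleAlt then
        screen.setColor(255, 100, 0)  -- terrain above the vehicle
    else
        screen.setColor(0, 200, 0)    -- terrain below the vehicle
    end
    local r = math.max(0.5, 3 - dist / 500) -- closer points draw bigger
    screen.drawCircleF(sx, sy, r)
end
```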
This is mostly useful at night, in fog, or in heavy rain, i.e. generally bad weather. It can also give spatial awareness in vehicles with no windows or very limited vision, such as submarines.
You can view any position (x, y, z) in 3D space, for debugging or whatever systems you can imagine: radar positions, outlining runways on land or on custom aircraft carriers, outlining helipads, navigation guidance.
At the top of the code (take a look), there is a screen configuration section which should be easy to configure.
In this code example, just above onTick() there is a section for the variables used in the demo.
What's used for the demo is also noted in onTick().
In onDraw(), the first section sets up the camera transform, with a line below it noting when you can draw.
The function WorldToScreen_Point(table) takes a single table as its argument, containing:
{ {x, y, z}, {...}, ... }
It returns a table: { {x, y, z, index}, {...}, ... }, where x and y are screen coordinates,
z is depth, and the fourth value is the index into the argument table, since points not in view are clipped.
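So usage would look something like this (a sketch based purely on the signature above; the point values are arbitrary):

```lua
local points = { {100, 20, 350}, {120, 5, 400} } -- world x, y, z per point

function onDraw()
    -- assumes the camera transform has already been set up as described
    local onScreen = WorldToScreen_Point(points)
    for i = 1, #onScreen do
        local p = onScreen[i]
        -- p[1], p[2] = screen x, y; p[3] = depth; p[4] = index into 'points'
        screen.drawCircleF(p[1], p[2], 1)
    end
end
```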