
saberhawk


Posts posted by saberhawk

  1. On 4/5/2018 at 8:27 AM, Raap said:

    A new W3D asset plugin would be great as well.

    A new asset plugin is somewhat required.

    On 4/5/2018 at 8:27 AM, Raap said:

    Saberhawk was dreaming big, too big perhaps, when he intended to effectively change how materials and meshes are set up. Sure, we need additional material features such as normal mapping and so on, but surely this can be achieved by updating the tools and engine separately rather than making some giant unwieldy mega system akin to current-era, massively funded, engines? The only merit I saw in his proposals, was making sure the engine didn't call materials of the same name more than once, which would mean materials would probably best be made in a universal collection, stored somewhere, and referenced through the 3D modelling plugin.

    Let's dig in on the engine details a bit before we call things giant unwieldy mega systems with no merit, shall we?

    On 4/5/2018 at 9:34 AM, Raap said:

    My main concern with W3D, as always, is the network code and specifically the infantry component, as well as the severe limitations of how W3D handles infantry meshes.

    So, skinned mesh rendering? The engine originally shipped with CPU-side rigid (i.e. one bone) skinning. It was stuck at this for a while because the contemporary exporter only supported rigid skinning, which doesn't work great for actual skin. The BFME version of the exporter added support for two-bone soft skinning by duplicating the rigid skinning system and storing an extra set of vertices optimized for rigid skinning along with the extra bone IDs and weights required. This is incredibly memory bandwidth and space intensive and doesn't scale very well to soft skinning with more than two bones. It does look better though, and the engine was updated to read and use this "new" type of skinning data.

    On 4/6/2018 at 9:34 AM, OWA said:

    So as far as I'm aware, W3D looks for skinned meshes within the file format and WWSkin is used to translate skinned weights into the W3D file format. So if the Max Skin Modifier is able to be exported and translated to skinned weights within the W3D file format, we eliminate the need for WWSkin. This does open up W3D for motion capture, CAT rigging, Inverse Kinematics and all of that good stuff. :)  Basically the key is how the skinning data is exported to the W3D file format rather than how the engine interprets it.

    I could be totally wrong, though.

    You are. The purpose of WWSkin was to make content in the content authoring tool behave the way content behaves in the engine. Things like inverse kinematics (i.e. feet following terrain) require engine-level support when not faked via baked hierarchy transform animations, which is what mocap data generally is (until you get into facecap) and which the engine already supports.

    4 hours ago, cfehunter said:

    It depends on the workload. For renegade GPU skinning is probably the right choice as the GPU isn't the performance bottleneck and most skinned objects are using common skeletons.

    We can always move to CPU skinning, but our min-spec is a single core CPU and in that particular case it would absolutely murder the frame rate.

    k. PS: Whatever you do, don't log how long MeshModelClass::get_deformed_vertices(Vector3* dst_vert, Vector3* dst_norm, uint32 stride, const HTreeClass* htree) takes as it hits the slow weighted skinning case many times each frame in a multiplayer match. 

    2 hours ago, Raap said:

    Firstly, could you suit up both the plugin and editor with as much informative tooltips as possible? If you want to attract new contributors to W3D projects, it would help if they knew what the various material settings did. Good tooltips often include examples such as saying "enabling this causes X and disabling this causes Y". Maybe make a pass on obsolete features as well. AFAIK "shininess" and "translucency" both do nothing, and in the shaders/texture tabs there is a trainload of settings with dubious purposes.

    Getting rid of confusing features like the W3D fixed function emulation übershader generator would really help.

    1 hour ago, cfehunter said:

    Shininess is supposed to impact specular intensity, but when the engine was moved over to DX9 with a DX8 emulation shader, instead of being actually DX8, the specular was broken and it hasn't been fixed since.

    Translucency likewise actually should work, but it'll only affect specific material types.

    An audit was performed on a large library of .w3d files and per-vertex specular lighting wasn't found in any of them, so I didn't bother to create test cases and implement accurate emulation for them just to keep the übershader from exploding even more than it already had. 

    23 minutes ago, Raap said:

    Having such things fixed and better documented would help, like what types of materials are affected by translucency? Not even I know that and I worked with this engine for literally over a decade.

    I wish I knew what all the various material settings really did. The options in the exporter GUI don't always match the data contained in exported files. Old exported files need to keep working; they don't just disappear. Exported files also contain plenty of features not exposed by the exporter, such as alternative materials.

    The rendering system's runtime data is very different from the stored data. For example, all texture coordinates are stored upside down and need to be flipped. Why? Because Renegade originally used SurRender 3D, which followed OpenGL math conventions instead of Direct3D ones.

    These quirks in the file format can't really be fixed without updating all the related tools, like the level editor. Which we couldn't really update, because the GUI used MFC6, which is now two decades obsolete and incompatible with newer compilers. The engine also performs plenty of fixups for "oh noes, we exported the file but the settings are wrong and we can't export a new version" and for fog.

    After all of that, the post-processed mesh finally makes it to the step of generating #defines for the shaders that it needs to compile to render triangles with that particular combination of settings. This happens with the source code found here: https://gist.github.com/saberhawk/199235c38a2b1051f06f0340cec9b5b4#file-ff_generator-cpp-L22. The rest of the gist contains the übershader code. There are some optimizations to prevent compiling a particular shader twice, and to load cached compiled shaders instead of compiling them on demand, but there is still a ridiculous number of combinations. 

    And then the actual D3D11 rendering engine just uses whichever shader (i.e. compiled .fx code, not the "shader" in the W3D exporter) and a combination of constant buffers and textures, some static and some dynamic. Which is so much more powerful than the limited options from the exporter, and what those options eventually end up in anyways so they can work. Is it really that much of a dream for tools to just use that directly instead of maintaining the fixed function emulation übershader generator?

  2. A fair question, we don't know yet. The game should have created crashdump files and stored them at %USERPROFILE%\documents\W3D Hub\games\apb-release\debug. If you compress (zip/rar/7z/bz2/whatever) the most recent crashdump files and upload them here we can take a look.

  3.  

     

    I noticed that changing game settings seems to not affect the game at all. I think someone else mentioned something about it as well.

    Changing settings in engine.cfg or through the launcher?

     

    Both apparently. I tried what saberhawk suggested and it didn't work.

     

    engine.cfg changes definitely affect the game. Is it possible that you are using an older version of Bandicam that doesn't support capturing D3D11 games?

  4.  

    I am just asking if VR is simply getting hardware plugged into an existing FPS game, or does there need to be significant software in the game to complement it. If it's the latter, of course it won't be doable.

    It is the latter. While it would be a neat feature, NVIDIA and Oculus have both released SDKs to help developers create new games for the HTC Vive and Oculus Rift. From what I've seen, the Oculus Rift also takes advantage of motion controls. That would mean an entirely new animation system, as the current one is so rigid it requires button presses to change animations. While I don't doubt the skill of the guys at Tiberian Technologies, I seriously doubt a complete, VR-ready animation system is in the works or even in the scope of their scripting knowledge. And if it were to favor the HTC Vive as opposed to the OR, you would want to make sure that the motion controllers could give feedback if used. Shit, I'd like to see a limited, adjustable aiming deadzone put in place (not that I would use it, but it's a nice feature).

     

    Long story short, while definitely possible, it would take too much development time. Without some sort of grant from EA to start working it, there would be no reason to do it.

     

     

    What most people think about modding and "scripting" with this engine is wrong, especially when it comes to recent iterations ("scripts 5.0"). Yeah, there's probably a feature where a button press will play an animation (on what?) in a networked game. Some people use it. It's far from the actual animation system's capabilities; it's an old "script" using a hacky interface put in place about a decade ago. Before we, you know, added a decade's worth of code and all the man-years of previous thought in the designs we cloned. Like all the ones required for defining the concept of a "script" in the first place, the input system to detect that button press, the netcode to send it to the server and synchronize clients, and the animation system to actually play it by sampling animation curves and feeding the also cloned/rewritten/rewritten again/and again rendering engine. All of these carry many of our original hacks for backwards compatibility with ourselves and some other closed-source projects, and are shared between a huge number of mods.

    Things are "laggy" because too much gameplay is written literally in scripts.dll in "script" form, which is all server-side scripting. Things are "ugly" because the tools suck and can't feed the engine better data, and I'm working as fast as I can to make better ones. I wish people would/could help, but they usually don't, so I prioritize whatever I feel like doing. Taking a break from thinking about VR systems (which is part of my day job) and working on a simple 3D engine is supposed to be relaxing.

     

    Does VR actually have much of a future for gaming, or is it just more of a gimmicky "look at the cool stuff we can do" thing?

    Always, but it's going to be much broader than what is traditionally thought of as gaming and adding in other forms of entertainment and communication. It's also incredibly useful when it comes to jobs that deal with designing physical objects. Designs for "things" are growing increasingly virtual and the ability to visualize and manipulate those designs directly and easily is incredible.

  5. That would require a lot of work, especially considering you'd have to separate the camera from the player character direction. Typically that's only reserved for sims. It'd be a challenge just to put in head-tracking support.

     

    It's not much work at all to remove the one line or so of code that updates the camera direction. Using head tracking is slightly more complicated because nobody seems to agree on directions; is +Z up, into the screen, or out of the screen?

     

    What's actually complicated is getting the engine to render twice as much content to a 2560x1200 screen at 90FPS. Failure to maintain frame rate could get people physically sick. Sudden camera movements need to be avoided, otherwise people get physically sick. Each eye uses a slightly different perspective, so any tricks that assume a single perspective need to be fixed, otherwise they will look weird and could also make people sick. A few things that immediately come to mind are certain particle systems, "billboards" (camera aligned objects), and the sky actually being a tiny sphere that's shrinkwrapped to the camera and nowhere close to where it should be physically.

     

    After people can finally stay inside the virtuality for more than a few minutes, then comes the matter of controls and redesigning the HUD. The old trick of "put it in the corners" doesn't work anymore, as those are almost entirely outside your field of view and very difficult to see. Depth contrast also needs to be introduced because it's a lot more effective than color contrast. The trickiest thing will likely be compelling and fair gameplay for VR players vs non-VR players. A 24 player VR-only match would be cool, but that's years away realistically.

     

    It's almost guaranteed it'd be too much work for almost no return, for a number of reasons. Especially since, what percentage of the players of APB are gonna even have VR capable computers any time soon?

     

    This only holds true if you assume the desired return is something like X new players or Y more sales because of that VR mode. It's very likely that there are only a few players with VR-capable computers, and APB is a free project. But it's run by volunteers, some of whom might see the experience of adding support for emerging VR technology to an existing game engine as rather valuable, even if only a handful of other people will see it at first. :3

     

    I am just asking if VR is simply getting hardware plugged into an existing FPS game, or does there need to be significant software in the game to complement it. If it's the latter, of course it won't be doable.

     

    Kinda. SteamVR has something called "Desktop Theater" which simulates a large screen for you to play games with. Potentially IMAX sized, pretty epic. That's probably the closest that you can get to running any existing game in VR and it's likely that it'll support a fake VR-like mode for games already supporting stereoscopic 3D rendering. But you can only count on it for the HTC Vive right now since it's still very early in the VR wars; it's unclear if Oculus will support SteamVR.

  6. I remember seeing a few Renegade maps where a "ship" (geometry, sometimes set up as a Building) was one team's base. Maybe an objective map where a cruiser building acts as the Allied team spawnpoint and spawns ships and helis for an amphibious assault on a Soviet base?

    A mission in the SP campaign was Havoc infiltrating a Nod ship.

  7. Correct me if I'm wrong, but wouldn't it be possible to fix the buggy icon appearance by making the W3D Hub launcher scale the font size with the set resolution, by having the launcher edit the stylemgr.ini file as the user sets a new resolution?

     

    That way we can ensure that the icons won't look messy on certain resolutions.

    If anybody tries that, I'll change the code reading/using stylemgr.ini to break compatibility. Because that's the better place to change it in the first place.

  8. Hi,

    I have a problem when starting TS Reborn: I get an internal error message.

    I found out that the game uses multiple back buffers, which causes the crash and the error message.

    So my question is: is there a way to configure the game to use a standard double buffer?

    Thanks

    This is news to me since I specifically avoided using simultaneous multiple render targets due to lack of hardware adoption. Which Intel GPU are you using?

  9. For a 9x-capable engine, that's ridiculous. Must be that the Scripts team just wants the newest VS builds just so they can say it's the newest (as it's not actually needed for the W3D engine). VS 2012 Update 1 has XP target support, and I'd love to see how many things one can do with that rather than VS 2026.

    Thanks for all the code you've contributed over the years. I thought you would have noticed by now that the 5.0 codebase uses features like non-static data member initializers, variadic templates, static_assert, delegating constructors, and deleted functions; all of which are new to VS2013, still work with the XP toolchain, and are rather useful. Something else in use that's incredibly useful is D3D9Ex; it simplifies video resource handling significantly by removing the need to track every resource and recreate it from scratch on alt-tab. Unfortunately that requires an operating system written in the past decade, go figure.

  10. Considering that code compilation has significantly improved over the last two decades, I could see it running even faster thanks to modern efficiency.

    This. Have often seen performance improvements just by cloning old code from optimized Renegade binaries.

  11. currently as in, could be fixed or changed otherwise? :)

    Unfortunately GDI command insists on using outdated fire-control systems from 2002 even though it's 2030 and technology has advanced since then. We are trying to convince them otherwise, but these sort of changes take time.

    ;)
  12. I consider FPS issues to be things like performing significantly worse than other games that look much better on the same hardware. Reborn has a ton of them. As for core system specs, I have a Core i7 3930k, 2x GTX 670, 16GB of RAM, and a 256GB Vertex 4. Those easily come up to $2500 worth of hardware, and it's not counting things like the case, monitors (3x 24" Dell U2410) or other internal hardware (like the other 6 drives, RAID controller, and sound card.)

  13. What I'm saying is to discuss the original ideas with the programming team instead of ones that were already shoehorned into perceived engine limitations. They will invariably know more about current engine capabilities and how to expand them or work around them appropriately.

  14. This is exactly what I'm talking about. Rather than actually stating the problem ("As a player using binoculars, I would like to mark targets for my team members"), everybody jumps right into various solutions that draw upon previous limitations that haven't been around for a while. :(

     

    For this particular problem, we have significant (if not full) control over the radar code, the stealth code, the weapon code, the "defense object" code (which provides health/armor), and much more. It's always better if we don't abuse existing gameplay systems. When we've done it before the features turned out very hacky, buggy, laggy, and overall not a good experience.

  15.  

     

    A script that would spawn a mark on the minimap when the warhead it is attached to collides with something would be probably quite useful for many things. In this case just give the binoculars a hitscan primary fire with an invisible warhead with that script.

    Well, you can't attach scripts to warheads; that's one drawback of the engine.

     

     

    The biggest drawback to this engine is people believing they understand the drawbacks of this engine and designing gameplay around those perceived drawbacks which probably aren't really drawbacks or limitations at all.