
Titanfall Dev Claims Xbox One Doesn't Need DX12 To Improve Performance 117

Posted by Soulskill
from the ask-carmack-for-some-tips dept.
MojoKid writes: "One of the hot topics in the wake of Titanfall's launch has been whether or not DirectX 12 would make a difference to the game's sometimes jerky framerate and lower-than-expected 792p resolution. According to Titanfall developer Jon Shirling, the new Microsoft API isn't needed to improve the game's performance, and updates coming down the pipe should improve Xbox One play in the near future. This confirms what many expected since DX12 was announced — the API may offer performance improvements in certain scenarios, but DX12 isn't a panacea for the Xbox One's lackluster performance compared to the PS4. It's an API that appears to mostly address scenarios where the CPU isn't able to keep the GPU fed due to draw call bottlenecks."
Comments Filter:
  • by CTU (1844100) on Sunday April 13, 2014 @01:23AM (#46738397) Journal

    The Xbone just sucks compared to the PS4, so it's no wonder the system can't run the game well.

    Well, I can't say I'm upset about not having an Xbone. If I really wanted this game, I think the PC would be better anyway, with a decent video card at least :)

  • by JavaBear (9872) on Sunday April 13, 2014 @02:52AM (#46738663)

    MS pulled a fast one at E3, where they used high-end PCs to demo the Xbox One.
    IIRC, MS later claimed that these were "representative" and also used for development. However, if those were the machines the devs were using to develop their game, it's no wonder they exceeded the available resources on the console.

  • by Emetophobe (878584) on Sunday April 13, 2014 @03:50AM (#46738769)

    That's BS. Microsoft and Sony fanboys mocked the Wii for targeting 720p. According to them, they had all their games in glorious 1080p while Wii peasants didn't have real HD.

    Correction: The Wii was 480p, not 720p.

  • The PS3 plays a lot of games at 1080p native...

    There is nothing wrong with the PS4/XB1, other than that for $400/$500 they don't really offer anything new.

    The PS1 was the first major 3D console; it was a massive improvement over the SNES.

    The PS2 offered DVD, vastly upgraded graphics, etc.

    The PS3 offered Blu-Ray, 1080p, and the first serious online console (from Sony).

    The PS4? Meh, it is a faster PS3, but otherwise, it doesn't offer anything new.

    Um...The PS3 renders very few games at 1080p native. Maybe a dozen titles out of the entire catalog.

    Don't forget the other dimension. 1080 is only 360 more than 720, but 1920 is 640 more than 1280. IMO, that's the dimension we should be talking about, since it's more significant. However, per-pixel calculation load scales with area, not with 1/2 the perimeter. So, if we look at total pixels: 1280x720 = 921,600 pixels and 1920x1080 = 2,073,600 pixels, a difference of 1,152,000. A lot of people don't understand that going from 720p to 1080p is MORE THAN TWICE the pixels (2.25x); in pixel shader costs you might as well be rendering a full secondary screen.
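    The arithmetic above can be checked in a few lines of plain Python (nothing here is specific to any console, just the standard 720p and 1080p dimensions):

```python
# Pixel counts for 720p vs 1080p, as discussed above.
def pixels(width, height):
    return width * height

p720 = pixels(1280, 720)    # 921,600 pixels
p1080 = pixels(1920, 1080)  # 2,073,600 pixels

print(p1080 - p720)   # 1152000 extra pixels
print(p1080 / p720)   # 2.25, i.e. more than twice the pixels
```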

    Now, that's not to say the total cost of rendering will necessarily increase more than twofold. Full-screen effects like bloom or HDR will come in at about twice the cost. Interpolating a texture coordinate to look up pixel values is cheap compared to almost any shader program, even something like cube-map specular highlights/reflections, bump mapping (I prefer parallax mapping), or shadow mapping. However, the complexity of geometry calculations can be the same at both resolutions. In a ported/cross-platform game the geometry assets are rarely changed (too expensive in terms of re-rigging all the animations, testing, etc.), so given slightly better hardware, a game at the same resolution will differ mainly in added particle effects, increased draw distance, maybe even a few whole extra pixel shaders (perhaps the water looks way more realistic, or flesh looks fleshier, blood is bloodier, reflections are more realistic, etc.)
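    A toy cost model makes this concrete: per-pixel work scales with resolution while geometry work stays fixed, so total frame time grows by less than the 2.25x pixel count. The nanosecond and millisecond figures below are invented for illustration, not measurements of any real GPU:

```python
# Toy frame-cost model: total cost = fixed geometry cost + per-pixel cost.
# The numbers (10 ns of shading per pixel, 5 ms of geometry work) are made up.
def frame_cost_ms(num_pixels, per_pixel_ns=10, geometry_ms=5.0):
    return geometry_ms + num_pixels * per_pixel_ns / 1e6

cost_720 = frame_cost_ms(1280 * 720)     # ~14.2 ms per frame
cost_1080 = frame_cost_ms(1920 * 1080)   # ~25.7 ms per frame

# Pixels grow 2.25x, but total frame time here grows only ~1.8x,
# because the geometry portion doesn't scale with resolution.
print(cost_1080 / cost_720)
```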

    Jumping up to 1080p makes your pixel shaders cost a lot more frame time. Developing for 1080p vs 720p would optimally mean completely reworking the graphics, assets, and shaders to adapt to the higher shader cost, maybe cutting down on pixel shader effects and adding more detailed geometry. I encounter folks who think "1080 isn't 'next gen', 4K would have been next gen" -- no, that's ridiculous. 1080p is the "next-gen resolution", but the new consoles are barely capable of it while carrying a significant increase in shader complexity over last gen, and we're seeing diminishing returns on increasing resolution anyway. So, I wouldn't call the consoles quite 'next-gen' in all areas. IMO, next-gen console graphics would handle significantly more shaders while running everything smoothly at 1080p, just like the above-average gaming PC I got my younger brother for his birthday, which kicks both the PS4's and Xbone's asses on those fronts. That would be the sort of leap in graphics we saw between the PS1 and PS2, or the Xbox and the 360. 4K would be a generation beyond 'next-gen' because of the way shader costs must scale with resolution.

    One of the main advances this new console generation brings is in the way memory is managed. Most people don't even understand this, including many gamedevs. Traditionally we've had to keep two copies of everything in RAM: one texture loaded from storage into main memory, and another copy stored on the GPU. The same goes for geometry, and sometimes a third, lower-detail copy of the geometry is kept in RAM for the physics engine to work on. The copy in main RAM is kept ready to shove down the GPU pipeline, and the resource manager tracks which assets can be retired and which will be needed, to prevent cache misses. That's a HUGE cost in total RAM. Traditionally this bus bandwidth has been a prime limitation on interactivity. Shader programs exist because we couldn't manipulate video RAM directly (they were the first step on the return to the software-rasterizer days, when physics, logic, and graphics could all interact freely). Shoving updates to the GPU is costly, but reading any data back from the GPU is insanely expensive. With a shared memory architecture we don't have to keep that extra copy of the assets, so even without an increase in CPU/GPU speed, fully shared memory by itself would practically double the amount of geometry and detail the GPU could handle. The GPU can directly use what's in memory, and the CPU can manipulate some GPU memory directly. It means we can compute stuff on the GPU and then readily use it to influence game logic, or vice versa, without paying a heavy penalty in frame time. The advance in heterogeneous computing should be amazing, if anyone knew what to do with it.
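    The duplicated-asset bookkeeping described above can be sketched with made-up numbers (the asset names and megabyte figures are purely illustrative, not real budgets):

```python
# Split-memory model: each renderable asset lives in system RAM *and* VRAM;
# geometry may also get a third, lower-detail physics copy in RAM.
assets_mb = {"textures": 1024, "geometry": 256}
physics_lod_mb = 64

split_total = 2 * sum(assets_mb.values()) + physics_lod_mb   # CPU copy + GPU copy
unified_total = sum(assets_mb.values()) + physics_lod_mb     # one shared pool

# The shared-memory model nearly halves the total asset footprint.
print(split_total, unified_total)
```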

    Ultimately I'd like to put the whole damn game in the GPU. It's not too hard on traditional memory-model hardware (well, it's insane but not impossible): you can keep all the game state and logic in buffers on the GPU and bounce between two state buffer objects, using shaders to compute physics and update one buffer as input for the next physics and render pass, passing in a few vectors to the programs for control/input. I've even done this with render-to-texture, but debugging VMs made of rainbow-colored noise is a bitch. The problem is that controller input, drives, and the NIC aren't available to the GPU directly, so I can't really make a networked game that streams assets from storage completely on the GPU alone; there has to be an interface, and that means the CPU feeding data in and reading data out across the bus, and that's slow for any moderate size of state I'd want to sync. At least with everything GPU-bound I can make particle physics interact with not just static geometry, but dynamic geometry AND even game logic: I can have each fire particle spawn more fire emitters if it touches a burnable thing, right on the GPU, and make that fire damage the players and dynamic entities; I can even have enemy AI reacting to the state updates without a round trip to the CPU if their logic runs completely on the GPU... With CPU-side logic that's not possible; the traditional route of read-back is too slow, so we have particles going through walls, and we use something like "tracer rounds", a few particles (if any) on the CPU, to interact with the physics and game logic. With the shared-memory architecture more of this becomes possible. The GPU can do calculations on memory that the CPU can read and apply to game logic without the bus bottleneck; the CPU can change some memory to provide input to the GPU without shoving it across a bus.
The XBone and PS4 stand to yield a whole new type of interaction in games, but it will require a whole new type of engine to leverage the new memory model. It may even require new types of games. "New memory architecture! New types of games are possible!" Compare with the GP: "Meh, it is a faster PS3, but otherwise it doesn't offer anything new." . . . wat?
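    The two-buffer "ping-pong" update described above is a standard pattern. Here is a minimal CPU-side sketch in plain Python; on real hardware the update would run as shader passes over GPU buffers, and the particle values below are invented:

```python
# Ping-pong state update: read one buffer, write the other, then swap.
# Each entry is a (position, velocity) pair for a 1D "particle".
def step(read_buf, write_buf, dt):
    for i, (pos, vel) in enumerate(read_buf):
        write_buf[i] = (pos + vel * dt, vel)  # the "shader" pass

state_a = [(0.0, 1.0), (5.0, -2.0)]
state_b = [None] * len(state_a)

for _ in range(4):                        # four simulated frames
    step(state_a, state_b, dt=0.5)
    state_a, state_b = state_b, state_a   # last write becomes the next read

print(state_a)  # [(2.0, 1.0), (1.0, -2.0)]
```

    Because the buffers only swap roles and are never copied, the GPU analogue never pays the bus cost of a read-back between frames.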

    As a cyberneticist, all these folks wanking over graphics make me cry. The AI team is allowed 1%, maybe 2%, of the budget. All those parallel FLOPS! And they're just going to PISS THEM AWAY instead of putting in actual machine intelligence that can yield more dynamism, or even learn and adapt as the game is played? You return to town and the lady pushing the wheelbarrow is pushing that SAME wheelbarrow the same way. The guy chopping wood just keeps chopping that wood forever: beat the boss, come back, still chopping that damn wood! WHAT WAS THE POINT OF WINNING? The games are all lying to you! They tell you, "Hero! Come and change the world!", and once you've won, it's just game over. Where's the bloody change!? Everything stays pretty much the same!? Start it up again, you get the same game world? Game "AI" has long been a joke; it's nothing like actual machine learning. It used to be the mark of a noob gamedev to claim their AI would learn using neural networks, and we'd all just laugh or nod our heads knowingly, but I can actually do that now, for real, on the current and coming generations of existing hardware... if the AI team is allowed the budget.

    A game is not graphics. A game is primarily rules of interaction; without them you have a movie. Today's AAA games are closer to being movies than games. Look at board games, or card games like Magic: The Gathering -- it's a basic set of rules plus cards that add a massive variety of completely new rules to the game mechanics, so the game is different every time you play. I'm not saying make card games. I'm saying that mechanics (the interaction between players, the simulation, and the rules) are what a game is. Physics is a rule set for simulating, fine; you can make physics games and play within simulations, but a simulation by itself isn't really a game, even if at the very least a world's geometry dictates how you can interact with the physics. Weapons, some spells, item effects, and the like might futz with the physics system, but it is very rare to see a game that layers on rules dynamically during the course of play in real-time 3D the way that paper-and-dice RPGs or even simple card games do. League of Legends does a very good job of adding new abilities that have game-changing ramifications, and the dynamic is great because of it, but that's a rare example, and it's still not as deep as simple card games like MtG. It's such a waste, because we have the RAM and processing power to do such things, but we're just not using it.

    I love a great story, but it looks like all the big-time studios are fixated on making only these interactive movies, to the exclusion of what even makes a game a game: the interaction with various rule logic. AAA games are stagnant in my opinion; it's like I'm playing the same game with a different skin, maybe a different combination of the same tired mechanics. The asset costs and casting, scripts, etc. prevent studios from really leveraging the amazing new dynamics and logic detail that are available in this generation of hardware, let alone next-gen hardware with shared memory architectures. IMO, most AAA studios don't need truly next-gen hardware because they don't know what the fuck to do with it -- mostly because they've been using other people's engines for decades. These 'next-gen' consoles ARE next gen in terms of the game advancement they enable, even rivaling PCs in that regard, but no one is showing them off. I hope that changes. Most folks are scratching their heads and asking, "How do I push more pixels with all this low-latency RAM?" and forgetting that pixels make movie effects, not games. I mean, I can run my embarrassingly parallel hive on this hardware, and give every enemy and NPC its own varied personality where the interactions with and between them become deeper and more nuanced than Dwarf Fortress, and the towns and scenarios and physics interactions more realistic, or whimsical, or yielding cascades of chaotic complexity... but... Dem not nxtGen, cuz MUH PIXZELS!!1!!1

    The enemies and NPCs in your games are fucking idiots, because "AI" and rules are what games are made of, and the AI team is starving to death while watching everyone else gorge themselves at the graphics feast. It's ridiculous. It's also pointless. So what if you can play Generic Army Shooter v42 with more realistic grass? Yeah, it's nice to have new shooters to play, but you're not getting a massive leap in gameplay. You could be protecting the guys who are rigging a building to crush the enemies as you retreat and cut off their supply lines. No, the level of dynamism in an FPS today is barely above that of a team of self-interested sharpshooters honing their bullseye ability. It's boring to me. Great, I'm awesome at shooting while running now. So fucking what. Protip: that's why adding vehicles was such a big deal in FPSs -- that was a leap in game mechanics and rules. I'm picking on FPSs, but I could level the same criticism at any genre: there's little in the way of basic cooperative strategy (cooperative doesn't have to mean with other players; instead of re-spawning, why not switch between the bodies of a team, with each member intuitively carrying out the task you initiated when you're no longer in that body?). We barely have any moderate complexity available in strategy itself, let alone the manipulation of new game rules on the fly for tactical, logistical, or psychological warfare. How many pixels does it take to cut off a supply line, or flank your enemies?
