XBox (Games) Graphics Microsoft

Titanfall Dev Claims Xbox One Doesn't Need DX12 To Improve Performance

MojoKid writes: "One of the hot topics in the wake of Titanfall's launch has been whether or not DirectX 12 would make a difference to the game's sometimes jerky framerate and lower-than-expected 792p resolution. According to Titanfall developer Jon Shirling, the new Microsoft API isn't needed to improve the game's performance, and updates coming down the pipe should improve Xbox One play in the near future. This confirms what many expected since DX12 was announced — the API may offer performance improvements in certain scenarios, but DX12 isn't a panacea for the Xbox One's lackluster performance compared to the PS4. It's an API that appears to mostly address scenarios where the CPU isn't able to keep the GPU fed due to draw call bottlenecks."
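For readers wondering what a "draw call bottleneck" actually looks like, here is a minimal sketch. It is not DX12, Xbox One, or Titanfall code; `Renderer`, `draw()` and `drawInstanced()` are hypothetical stand-ins for whatever a real graphics API provides. The point is only that issuing one call per object spends CPU time on per-call submission overhead, while batching/instancing submits the same GPU work in far fewer calls, which is the class of problem lower-overhead APIs like DX12 are aimed at.

```cpp
// Hypothetical renderer -- a stand-in, not any real graphics API.
#include <cstdio>
#include <vector>

struct Mesh { int id; };
struct Transform { float x, y, z; };

struct Renderer {
    long calls = 0;
    // One draw call per object: the GPU work is small, but the CPU pays
    // validation/submission overhead on every single call.
    void draw(const Mesh&, const Transform&) { ++calls; }
    // One call for N instances of the same mesh: CPU-side cost is paid once,
    // the GPU still renders every instance.
    void drawInstanced(const Mesh&, const std::vector<Transform>&) { ++calls; }
};

int main() {
    const int kObjects = 10000;
    Mesh crate{1};
    std::vector<Transform> positions(kObjects, Transform{0.0f, 0.0f, 0.0f});

    Renderer naive, batched;
    for (const auto& t : positions) naive.draw(crate, t);  // 10,000 calls
    batched.drawInstanced(crate, positions);                // 1 call

    std::printf("naive: %ld draw calls, instanced: %ld draw call(s)\n",
                naive.calls, batched.calls);
    return 0;
}
```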
  • by Joce640k ( 829181 ) on Sunday April 13, 2014 @02:26AM (#46738407) Homepage

    If the framerate is jerky then they didn't plan the game properly.

    There's no excuse on a console where you know the exact resources available, right down to individual clock cycles.

  • by _Shorty-dammit ( 555739 ) on Sunday April 13, 2014 @03:04AM (#46738519)

    Only they're also known targets, and should be easy to program for as a result. Performance at 1920x1080 shouldn't be an issue for any title on the hardware available. It boggles the mind how poor these developers must be if they can't even target known hardware, console-style, and get good performance out of the thing. Average PC game devs don't seem to have any problem doing so on the PC, and that's a moving target. Why would any competent devs have a problem with a fixed target? They've got decent CPUs. They've got decent GPUs. They've got a decent amount of RAM. Yet they found a way to get horrible performance out of it. Send in the firing squad.

  • by guises ( 2423402 ) on Sunday April 13, 2014 @03:23AM (#46738551)
    It's not about releasing an underpowered console, it's about focusing on performance as a selling point. The Wii U can't do what either of them can graphically, but it's the only one I actually want. No DRM bullshit, no ads, no camera in my living room, the games are actually fun, off screen play... I'm getting a little sick of people treating this like it's a two horse race.
  • Yes and no (Score:4, Insightful)

    by Sycraft-fu ( 314770 ) on Sunday April 13, 2014 @04:52AM (#46738779)

    So they are a bit different, hardware-wise. A big difference is unified memory: there is only one pool of memory which both the CPU and GPU access. That makes sense since the CPU and GPU are also on the same silicon, but it is a difference in the way you program. Also, in the case of the Xbone they decided to use DDR3 RAM instead of GDDR5, which is a little slow for graphics operations, but the APU (what AMD calls the CPU/GPU combo chips) has 32MB of high-speed embedded RAM on it to try and buffer for that.

    OK, so there are some differences. But that aside, why is even a fixed target a problem? Visual quality. Basically, a video card can only do so much in a given time period. It can only push so many pixels/texels, only run so many shaders, etc. So any time you add more visual flair, it takes up available power. There's no hard limit, no amount where it stops working; rather, you have to choose what kind of performance you want.

    For example, if I can render a scene with X polygons in 16ms, then I can output it at 60fps. But it also means that a scene with 2X polygons takes about 33ms, which is 30fps.

    So FPS is one tradeoff you can make. You don't have to render at 60fps, you can go lower and indeed console games often do 30fps. That means each frame can have more in it, because the hardware has longer to generate it.

    Another tradeoff is resolution. Particularly when you are talking texture related things, lowering the output resolution lowers the demand on the hardware and thus allows you to do more.

    So it is a tradeoff in what you think looks best. Ya, you can design a game that runs at 1080p60 solid. However it may not look as good overall as a game that runs at 720p30 because that game, despite being lower FPS and rez, has more detail in the scenes. It is a choice you have to make with limited hardware.

    On the PC, we often solve it by throwing more hardware at the problem, but you can't do that on a console. (A rough numerical sketch of this tradeoff follows below.)
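To put rough numbers on the framerate/resolution tradeoff described in the comment above, here is a small sketch. The figures are simple arithmetic, not measurements of any console; the 1408x792 entry is the resolution reported for Titanfall on Xbox One at launch.

```cpp
#include <cstdio>

int main() {
    // Frame-time budget: the renderer gets 1000/fps milliseconds per frame,
    // so dropping from 60fps to 30fps roughly doubles the time available.
    const int rates[] = {60, 30};
    for (int fps : rates)
        std::printf("%2d fps -> %4.1f ms per frame\n", fps, 1000.0 / fps);

    // Resolution tradeoff: fill/shading work scales roughly with pixel count.
    struct Res { const char* name; int w, h; };
    const Res resolutions[] = {
        {"1920x1080", 1920, 1080},  // full 1080p
        {"1408x792",  1408,  792},  // Titanfall's reported Xbox One resolution
        {"1280x720",  1280,  720},  // 720p
    };
    const double px1080 = 1920.0 * 1080.0;
    for (const Res& r : resolutions) {
        const double px = double(r.w) * r.h;
        std::printf("%-9s: %.2f MPix (%.0f%% of 1080p)\n",
                    r.name, px / 1e6, 100.0 * px / px1080);
    }
    return 0;
}
```

So 792p is roughly half the pixels of 1080p, and a 30fps target doubles the per-frame budget again, which is exactly the kind of trade the comment describes.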
