Unnecessary since b93b7ec. It was needed before that commit because
RenderBase.cpp was only checking the value of aspect_mode, not
suggested_aspect_mode.
This way we don't end up with artifacts of the EFB's alpha values in
frame dumps. XFB copies loaded from RAM also set the alpha to 1, so this
will match.
Due to the current design, any of the GL state can be mutated after
calling this function, so we can't assume that the tracked state will
match if we call SetPipeline() after ResetAPIState().
Since we use the common pipelines here and draw vertices if a batch is
currently being built by the vertex loader, we end up trampling over its
pointer, as we share the buffer with the loader, and it has not been
unmapped yet. Force a pipeline flush to avoid this.
It doesn't feel great to let a value from a previous emulation session
linger, considering that the GC aspect ratio heuristic uses the previous
value of m_aspect_wide when calculating the new m_aspect_wide.
The Metal shader compiler fails to compile the atomic instructions
when operating on individual components of a vector. Splitting it
into four separate variables shouldn't make any difference on other
platforms, as the components are accessed independently.
The path to the MoltenVK library can be specified by the
LIBMOLTENVK_PATH environment variable, otherwise it assumes it is
located in the application bundle's Contents/MacOS directory.
floatindex is clamped to the range [0, 9]. For non-negative numbers
floor() is equivalent to trunc(). Truncation happens implicitly when
converting to uint, so the floor() is unnecessary.
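In C++ terms the same arithmetic looks roughly like this (the real code
lives in a generated shader; the function name here is made up):
#include <algorithm>
#include <cstdint>

uint32_t IndexFromFloat(float floatindex)
{
  const float clamped = std::clamp(floatindex, 0.0f, 9.0f);
  // The conversion to an unsigned integer already truncates toward zero, and
  // truncation equals floor() for non-negative values, so no std::floor() is
  // needed here.
  return static_cast<uint32_t>(clamped);
}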
This avoids out-of-bounds warnings when replaying FIFO captures.
The value of XF_REGS_SIZE is written into the DFF header and we only
read the min of XF_REGS_SIZE and the header value, so this change is
backward compatible and doesn't break forward compatibility for old
Dolphin versions either.
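A minimal sketch of that compatibility rule (the names and the constant
below are illustrative, not the actual FIFO player code):
#include <algorithm>
#include <cstdint>

constexpr uint32_t XF_REGS_SIZE = 96;  // placeholder value for illustration

uint32_t NumXfRegsToRead(uint32_t header_xf_regs_size)
{
  // Read only the smaller of the compiled-in size and the size recorded in
  // the DFF header, so files from both older and newer versions load.
  return std::min(XF_REGS_SIZE, header_xf_regs_size);
}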
The current approach results in the UI thread creating a graphics device
whilst the core is running, leading to races on function pointers, and
potentially crashing.
This fixes severe image flickering in some cutscenes of Twin Snakes. The game appears to sometimes load a previously made XFB copy as a texture before it is actually rendered to the screen, which we took as an invitation to invalidate the XFB copy.
If, for whatever reason, the XFB has to be loaded from console memory, it's possible that the texture is returned at native resolution instead of EFB-scaled resolution. In this case, our xfb_rect.right adjustment must also happen at native resolution instead of scaled resolution.
This was causing a warning in the shader compiler, as the rgb components
were not initialized. That shouldn't be an issue, as the rgb is not
used in the blend equation, only the alpha. However, the lack of
initialization causes crashes in Intel's D3D shader compiler, so we'll
play nice and initialize all the channels.
Our usage of glFinish() can cause driver crashes and/or lockups.
Please note that this disables the background shader compilation (i.e.
all shaders will be compiled on boot). There is no way around this.
AVFormatContext::filename was deprecated in lavf 58.7.100 in favor
of AVFormatContext::url. Instead of adding version-checking logic,
just use the passed-in dump path.
We want this setting to invalidate the cache because it may affect the appearance of textures in the rendered scene; one would therefore expect changing it while the game is running to take effect immediately.
This no longer converts from sRGB to linear for the reference mip
downsample - even if the original mipmap creation tool used an sRGB
colorspace (which isn't really guaranteed, and may even change per
game), this is a "fast" heuristic that's only an estimate anyway.
The average diff is also now stored in a u64, avoiding floating point
calculations in the per-pixel hot loop.
This should speed up the detection significantly, hopefully fixing
jank when loading in new textures.
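Roughly, the integer-only accumulation amounts to something like this
(assuming 8-bit channel data; not the actual heuristic code):
#include <cstddef>
#include <cstdint>
#include <cstdlib>

uint64_t AverageAbsoluteDiff(const uint8_t* a, const uint8_t* b, std::size_t count)
{
  uint64_t total = 0;
  for (std::size_t i = 0; i < count; ++i)
    total += static_cast<uint64_t>(std::abs(static_cast<int>(a[i]) - static_cast<int>(b[i])));
  // Only one division at the end; the per-pixel loop stays in integer math.
  return count != 0 ? total / count : 0;
}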
Makes the values strongly-typed and gets more identifiers out of the
global namespace.
We are forced to use anything that is not "None" to mean none, because
X11 is garbage in that it has:
#define None 0L
Because clearly no one else will ever want to use that identifier for
anything in their own code (which is why you should prefix literally
any and all preprocessor macros you expose to library users in public
headers).
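For illustration, the clash looks roughly like this (the enum is made up):
#define None 0L  // as in X11/X.h

// enum class EventType { None, KeyPress };  // "None" expands to "0L": won't compile
enum class EventType { Null, KeyPress };     // so a different spelling is used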
This is much better, as identifiers prefixed with double underscores are
reserved for the implementation. Another reason it's better is that, on
Windows, where __forceinline is a compiler built-in, the previous define
meant that header-inclusion tools which detect unnecessary includes would
erroneously flag usages of Compiler.h as unnecessary (despite them being
necessary on other platforms). So we define a macro that's used on both
Windows and other platforms to ensure this doesn't happen.
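A sketch of the pattern (not the verbatim Compiler.h contents):
#if defined(_MSC_VER)
#define DOLPHIN_FORCE_INLINE __forceinline
#else
#define DOLPHIN_FORCE_INLINE inline __attribute__((always_inline))
#endif

// Both Windows and other platforms now spell it the same way, so tooling can
// see that Compiler.h is genuinely needed wherever the macro is used.
DOLPHIN_FORCE_INLINE int AddOne(int x)
{
  return x + 1;
}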
Instead of globbing things under an ambiguous Common.h header, move
compiler-specifics over to Compiler.h. This gives us a dedicated home
for anything related to compilers that we want to make functional across
all compilers that we support.
This moves us a little closer to eliminating Common.h entirely.
Fairly trivial to resolve: we just initialize the std::array with two
sets of braces (one set to create the array, the other to start and end the
aggregate data that we'll end up returning).
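For example (values are arbitrary):
#include <array>

std::array<int, 2> MakePair()
{
  // Outer braces initialize the std::array itself, inner braces initialize
  // the aggregate data it wraps.
  return {{1, 2}};
}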
Coherent mappings have a lower overhead and less GL code, so this enables
coherent mapping by default for all drivers.
Both Qualcomm and ARM perform very badly with explicit flushing, so this
change helps them as well.
AFAIK there was one GPU generation that was slower with coherent mapping:
NVIDIA Tesla. So the GeForce 200 and 300 series should be tested with this
PR before merging. As this was last tested many years ago, the issue might
have been fixed as well. Those GPUs are close to 10 years old and no longer
supported by NVIDIA.
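A minimal sketch of the coherent path (assumes a GL 4.4+/ARB_buffer_storage
context with a function loader already set up; target and size are arbitrary):
void* CreateCoherentStreamBuffer(GLuint* out_buffer, GLsizeiptr size)
{
  const GLbitfield flags =
      GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;

  glGenBuffers(1, out_buffer);
  glBindBuffer(GL_ARRAY_BUFFER, *out_buffer);
  glBufferStorage(GL_ARRAY_BUFFER, size, nullptr, flags);

  // With GL_MAP_COHERENT_BIT, writes through the returned pointer become
  // visible to the GPU without any glFlushMappedBufferRange() calls.
  return glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
}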
D3D11 cannot handle block compressed textures where the first mip level
is not a multiple of the block size. The simple fix for texture pack
authors: leave these textures uncompressed. You can still use a .dds
container.
Using 8-bit integer math here led to precision loss for depth copies,
which broke various effects in games, e.g. lens flare in MK:DD.
It's unlikely the console implements this as a floating-point multiply
(fixed-point perhaps), but since we have the float round trip in our
EFB2RAM shaders anyway, it's not going to make things any worse. If we
do rewrite our shaders to use integer math completely, then it might be
worth switching this conversion back to integers.
However, the range of the values (format) should be known, or we should
expand all values out to 24-bits first.
Also ensure that all members of the class are initialized on construction.
Previously the bool indicating whether options are dirty wouldn't be
initialized, and could be read uninitialized if an instance was constructed
and IsDirty() was then called.
Given this is a base class, we should clearly state what the parameters
to the functions in its exposed interface actually mean or represent.
This avoids needing to hunt for the definition of the functions in cpp
files.
While we're at it, normalize said parameter names so they follow our
naming guidelines.
Hopefully this better matches the user's view of a texture, as large changes
in colour should be weighted more heavily than lots of very small changes.
Note: this likely invalidates the current default heuristic threshold.
This makes it possible to use enums as the config type.
Default values are now clearer and there's no need for casts
when calling Config::Get/Set anymore.
In order to add support for enums, the common code was updated to
handle enums by using the underlying type when loading/saving settings.
A copy constructor is also provided for conversions from
`ConfigInfo<Enum>` to `ConfigInfo<underlying_type<Enum>>`
so that enum settings can still easily work with code that doesn't care
about the actual enum values (like Graphics{Choice,Radio} in DolphinQt2
which only treat the setting as an integer).
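A simplified sketch of the conversion (just the shape of the idea, not the
actual Config layer):
#include <string>
#include <type_traits>
#include <utility>

template <typename T>
struct ConfigInfo
{
  std::string key;
  T default_value;

  ConfigInfo(std::string key_, T default_value_)
      : key(std::move(key_)), default_value(default_value_)
  {
  }

  // Conversion from ConfigInfo<Enum> to ConfigInfo<underlying_type<Enum>>,
  // for code that only treats the setting as an integer.
  template <typename Enum, std::enable_if_t<std::is_enum<Enum>::value>* = nullptr>
  ConfigInfo(const ConfigInfo<Enum>& other)
      : key(other.key), default_value(static_cast<T>(other.default_value))
  {
    static_assert(std::is_same<T, std::underlying_type_t<Enum>>::value,
                  "Conversion only makes sense to the underlying type");
  }
};

enum class AspectMode : int
{
  Auto,
  Stretch,
};

// Strongly typed: no casts needed when getting/setting the value.
const ConfigInfo<AspectMode> GFX_ASPECT_MODE{"AspectMode", AspectMode::Auto};
// Integer view of the same setting, e.g. for a generic choice widget.
const ConfigInfo<int> GFX_ASPECT_MODE_RAW{GFX_ASPECT_MODE};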
Also move it to MathUtils where it belongs with the rest of the
power-of-two functions. This gets rid of pollution of the current scope
of any translation unit with b<value> macros that aren't intended to be
used directly.
Also makes y_scale a dynamic parameter for EFB copies, as it doesn't
make sense to keep it as part of the uid, otherwise we're generating
redundant shaders.
Prior to this change, it's possible for m_wake_me_up_again to be used
while it's in an uninitialized state from the exposed API.
e.g.
- Using SetEnable after construction would perform an uninitialized read.
- Using PushEvent would perform an uninitialized read by way of operator |=.
Internally, an uninitialized read can happen if PullEventsInternal() is
executed before other functions.
To avoid any possibility of performing uninitialized reads, we simply give
the class member a default value of false.
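The fix amounts to a default member initializer, roughly (surrounding code
made up):
class EventPuller  // illustrative name
{
public:
  void SetEnable(bool enable) { m_wake_me_up_again = enable; }
  void PushEvent(bool wakeup) { m_wake_me_up_again |= wakeup; }

private:
  // Defined value from construction onward, regardless of which member
  // function happens to run first.
  bool m_wake_me_up_again = false;
};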
This way, if we load a UID cache where the data was incomplete (e.g.
Dolphin crashed), we don't lose the existing UIDs which were previously
at the beginning.
This will create a merge conflict if two PRs try to increment the
cache version at the same time, which makes it noticeable that the
PR that gets merged last needs to increment the cache version again.
We already use this for savestates and the game list cache.
Minimizes repetition.
std::minmax_element can be used for the 256 * 2 case: it only performs byte
comparisons, and thus there will always be an element smaller than 0xffff,
so that value doesn't need to be included in the set of compared values.
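Roughly (buffer type and size are placeholders):
#include <algorithm>
#include <array>
#include <cstdint>
#include <utility>

std::pair<uint16_t, uint16_t> FindMinMax(const std::array<uint16_t, 256>& values)
{
  // One pass over the table yields both extrema.
  const auto [min_it, max_it] = std::minmax_element(values.begin(), values.end());
  return {*min_it, *max_it};
}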
Skip ubershader mode works the same as hybrid ubershaders in that the
shaders are compiled asynchronously. However, instead of using the
ubershader to draw the object, it skips it entirely until the
specialized shader is made available.
This mode will likely result in broken effects where a game creates an
EFB copy, and does not redraw it every frame. Therefore, it is not a
recommended option; however, it may result in better performance on
low-end systems.
This wouldn't be much of a data reader if it couldn't access the
read-only data pointer in read-only contexts. Especially if it
can get a writable equivalent in contexts that aren't read-only.
It's questionable to not return a reference to the instance being
assigned to. It's also quite misleading in terms of expected behavior
relative to everything else. This fixes it to make it consistent with
other classes.
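A sketch of both points (class and members are illustrative, not the actual
DataReader):
#include <cstdint>

class ByteReader
{
public:
  ByteReader() = default;
  ByteReader(const ByteReader&) = default;

  // Read-only access in const contexts, writable access otherwise.
  const uint8_t* GetPointer() const { return m_ptr; }
  uint8_t* GetPointer() { return m_ptr; }

  // Returns a reference to the assigned-to instance, like every other
  // assignment operator.
  ByteReader& operator=(const ByteReader& other)
  {
    m_ptr = other.m_ptr;
    m_end = other.m_end;
    return *this;
  }

private:
  uint8_t* m_ptr = nullptr;
  uint8_t* m_end = nullptr;
};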
Allows the default constructor to be defaulted and ensures the default
values are associated with the member variables directly.
Also corrects a prefixed underscore in the two-parameter constructor.
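For instance (member names are made up):
struct CopyParams  // illustrative
{
  CopyParams() = default;
  CopyParams(unsigned int width_, unsigned int height_)  // no leading underscore
      : width(width_), height(height_)
  {
  }

  // Defaults live next to the members themselves.
  unsigned int width = 0;
  unsigned int height = 0;
};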
Given that this only contains functions from the VideoBackendBase class,
it makes more sense to move these to the relevant cpp file to keep them
all together.
This is only ever memset to zero and never used again.
This also gets rid of an instance of undefined behavior considering the
draft standard for C++17 (N4659) states at [dcl.type.cv] paragraph 5:
"
The semantics of an access through a volatile glvalue are implementation-defined.
If an attempt is made to access an object defined with a volatile-qualified type
through the use of a non-volatile glvalue, the behavior is undefined.
"
Currently, when immediately compile shaders is not enabled, the
ubershaders will be placed before any specialized shaders in the compile
queue in hybrid ubershaders mode. This means that Dolphin could
potentially use the ubershaders for a longer time than it would have if
we blocked startup until all shaders were compiled, leading to a drop in
performance.
- In D3D, shaders could be compiled on the main thread, blocking
startup.
- Reduced the latency between a pipeline being requested and used in all
backends in hybrid ubershader mode, when no shader stages were present.
- Fixed a case where async compilation could cause the same UID to be
appended multiple times to the UID cache.
- Fixed an incorrect number of threads being used when immediately compile
shaders was enabled.
Lowest hanging fruit I could find with a profiler.
Not sure this stuff actually needs to be done, but assuming it is, why
not do it quickly? 10x faster, goes from 1% CPU to 0.09%.
This enables shaders to be compiled while the game is starting, instead
of blocking startup. If a shader is needed before it is compiled,
emulation will block.
As these are stored in a map, operator< will become a hot function when
doing lookups, which happen every frame. std::tie generated a rather
large function here with quite a few branches.
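A sketch of the alternative (assuming the key packs into a single integer,
which won't hold for every state struct):
#include <cstdint>

struct SamplerKey  // illustrative
{
  uint64_t hex = 0;  // all fields packed into one word

  // One comparison instead of the branchy member-by-member chain that
  // std::tie tends to produce.
  bool operator<(const SamplerKey& other) const { return hex < other.hex; }
};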
tl;dr: This PR speeds up Dolphin on mobile devices with a Mali GPU and ES 3.2
drivers by a factor of 10, even though it has to fall back to the streaming
method with the biggest overhead. Please take care not to buy this shit!
The ARM driver team seems to care very well about their customers. But
bad luck, users and open source developers are *not* their customers. So
even device-independent feature requests are just ignored for *years*:
https://community.arm.com/graphics/f/discussions/4645/gl_ext_buffer_storage-support
The bad point: they don't implement any of the other common ways to
stream dynamic content in unextended GL either:
- They just ignore the GL_MAP_UNSYNCHRONIZED_BIT flag
- They don't support on-device buffer updates and just stall with
glBufferSubData
It seems like no benchmark uses any dynamic content - and like no
customer cares about anything but benchmarks, certainly not about users...
We already have a flag to disable the glBufferSubData path; this PR adds a
flag to also disable the unsynchronized mapping path. The latter has been
available since their ES 3.2 update, but it is slow as hell.
So how to continue? The last remaining technical way to stream dynamic
content at all is to allocate a new buffer per draw call with glBufferData.
This is very gross, but still a factor-10 speedup compared to stalling the
GPU. Small tests show that you can expect another 3-5x speedup with
EXT_buffer_storage, so Mali would be on par with Adreno here. So if you have
unfortunately bought such a device, please try to make noise on your vendor's
forums/support channels and ask for this extension. If you are going to buy a
new mobile, I'd recommend avoiding *any* mobile with a Mali GPU in it.
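The fallback boils down to re-specifying the buffer's storage each draw,
roughly (assumes a current GL context and an existing buffer object):
void UploadStreamedVertices(GLuint buffer, const void* vertex_data, GLsizeiptr size)
{
  glBindBuffer(GL_ARRAY_BUFFER, buffer);
  // glBufferData creates a fresh data store, so the driver doesn't have to
  // stall on memory the GPU may still be reading from.
  glBufferData(GL_ARRAY_BUFFER, size, vertex_data, GL_STREAM_DRAW);
}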
We now differentiate between a resize event and surface change/destroyed
event, reducing the overhead for resizes in the Vulkan backend. It is
also now safe to change the surface multiple times if the video thread
is lagging behind.
The option is named DisableCopyToVRAM under the Hacks section in
GFX.ini. It is intentionally not exposed to the GUI, as users should not
need to use it under normal circumstances. The main use is debugging
issues in the EFB-to-RAM shaders.
The console appears to behave contrary to the standard IEEE 754 specification
here, in particular around how NaNs are handled. NaNs appear to have no
effect on the result, and are treated the same as positive or negative
infinity, based on the sign bit.
However, when the result would be NaN (inf - inf, or (-inf) - (-inf)),
this results in a completely fogged color, or unfogged color
respectively. We handle this by returning a constant zero for the A
variable, and positive or negative infinity for C depending on the sign
bits of the A and C registers. This ensures that no NaN value is passed
to the GPU in the first place, and that the result of the fog
calculation cannot be NaN.
Otherwise we might get UB if the value we cast is larger than the
max value of the underlying type that the compiler picked for the enum.
I haven't done any extensive check through Dolphin to find cases
of this, I'm just fixing the cases I already know of.
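For illustration (enums are made up):
#include <cstdint>

enum TinyEnum  // no fixed base: the compiler picks an underlying type just
{              // big enough for the enumerators
  TINY_A = 0,
  TINY_B = 1,
};

enum WideEnum : uint32_t  // explicit underlying type
{
  WIDE_A = 0,
  WIDE_B = 1,
};

// static_cast<TinyEnum>(0x12345678) can be undefined behaviour because the
// value lies outside the enum's range of values, whereas
// static_cast<WideEnum>(0x12345678) is well-defined.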
Tested on Linux with Intel Skylake integrated graphics and
blend_func_extended force-disabled, as it's the only platform I have
that doesn't crash with ubershaders and supports fb_fetch.
It seems the driver doesn't like modifying inout variables in place, so
instead we use a temporary for ocol0/ocol1 and only write them once at the
end of the shader.
GLES doesn't support C-style array initialisers, so stuff like:
Type var[2] = {
  VALUE_A,
  VALUE_B
};
isn't supported (it was added in
GL_ARB_shading_language_420pack).
The texture conversion shader used this, so would fail to compile on
GLES.
HLSL does not define roundEven(), only round(). This means that the
output may differ slightly for OpenGL vs Direct3D. However, it ensures
consistency across OpenGL drivers, as round() in GLSL can go either way.
This fixes the rendering of the scan visor in Metroid Prime 2: Echoes,
as seen in https://fifoci.dolphin-emu.org/dff/mp2-scanner/
The alpha channel was off-by-one on Ivy Bridge due to the rounding
after multiplication with colmat. This commit removes this matrix
altogether in most cases, making them simple GLSL swizzles.