X Error of failed request: BadValue (integer parameter out of range for operation)
Major opcode of failed request: 154 (GLX)
Minor opcode of failed request: 3 (X_GLXCreateContext)
Value in failed request: 0x0
Serial number of failed request: 171
Current serial number in output stream: 172
Might be fixable with Wine knowledge beyond what I have (probably whatever it takes to get the average 3d game running would deal with it).
Press the Enter key to mark the current colour in the palette;
while holding ctrl, it marks the entire block of active colours beneath.
More specifically, Enter toggles the mark,
and ctrl + Enter toggles the mark for the entire block:
unmarked colours become marked,
and marked colours become unmarked.
The mark is remembered when moving colours
and on transfer between different palettes.
When using ctrl with the RGB colour component modifier keys,
the change now affects all the marked colours in the palette at once.
This allows more effective colour palette management.
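A minimal sketch of how such mark toggling and marked-only adjustment might work. All names here are hypothetical illustrations, not the actual Blackbox code:

```python
# Hypothetical sketch of palette mark toggling (illustrative only).

class Palette:
    def __init__(self, slots=256):
        self.marked = [False] * slots

    def toggle_mark(self, index):
        """Enter: toggle the mark on a single colour slot."""
        self.marked[index] = not self.marked[index]

    def toggle_block(self, start, end):
        """Ctrl + Enter: toggle the mark on every slot in a block."""
        for i in range(start, end):
            self.marked[i] = not self.marked[i]

    def adjust_marked(self, colours, delta):
        """Ctrl + RGB modifier: apply a component change to marked slots only."""
        return [c + delta if m else c
                for c, m in zip(colours, self.marked)]
```

Because every operation is a toggle, applying it twice restores the previous state, which matches the described behaviour of mixed blocks flipping slot by slot.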
The Blackbox principle immortalizes pixel art. It's like the Pythagorean theorem in math.
Always nice to feel like that about something. What kind of pixel precision issues are you talking about though? (didn't notice anything odd in that last gif)
An eternal truth. Limitless. Never again can hardware outpace it.
Everything else is insignificant to its scale.
It is the law of the world.
No matter the screen technology or processing, this interpretation of pixel art adapts and trounces.
Its abstraction and virtualization of the grid distills the ultimate core creativity of pixel art.
Other trends and tricks in graphics may come and go. The basic pixel blueprint to the universe stays forever.
Godly pixel power. That spells judgment on all things. Tremble, mortal. For you have sinned against the sacred art.
could you show some screen or vid of the precision artifacts?
Sorry, I'm late.
This was how it sometimes looked before the fix for the current size settings:
(http://pandemonium.graphics/img/artifacts.png)
However, even for larger sizes this can be avoided by painting most cubes in a single colour rather than a different colour per side.
When going way too large on the settings though, the geometry just goes totally haywire.
Download update version 0.1y (http://pandemonium.graphics/files/BVT_v01y.zip)
Fixed crash when a frame exceeds its voxel capacity.
Fixed palette management corrupting other frames.
Fixed possible issue with selection on frame switch.
Fixed palette transfers not updating other frames.
Fixed new frame palette not set to current palette.
Fixed colour manipulation while in animation mode.
Fixed boundaries in mass colour modification.
Changed number of slots in palette from 1024 to 256.
(may increase again later if necessary, but want to see this play out first.)
Changed default frame time modifier step from 25 milliseconds to 1/60 of a second (one frame at 60 fps).
Added File System. You can now save to file.
key F5 for saving model and palette to file save.bvt.
shift + F5 for saving only model to file save.vmp.
ctrl + F5 for saving only colour palette to file save.vcp.
key F6 for loading model and palette from file save.bvt.
shift + F6 for loading only model from file save.vmp.
ctrl + F6 for loading only colour palette from file save.vcp.
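The key-to-file mapping above can be sketched as a small dispatch table. The file names come from the changelog; the function itself is an illustrative assumption, not the actual Blackbox code:

```python
# Hypothetical sketch of the F5 save dispatch (illustrative only).
# File names (save.bvt, save.vmp, save.vcp) are taken from the changelog.

SAVE_TARGETS = {
    ("F5", None):    ("save.bvt", {"model", "palette"}),
    ("F5", "shift"): ("save.vmp", {"model"}),
    ("F5", "ctrl"):  ("save.vcp", {"palette"}),
}

def save_target(key, modifier=None):
    """Return (filename, parts to save) for a keypress, or None if unbound."""
    return SAVE_TARGETS.get((key, modifier))
```

The F6 load bindings would mirror the same table with the data flowing the other way.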
The file system allows for many changes while keeping backwards compatibility.
Your old work files will always be usable in newer versions of Blackbox.
Currently the file sizes are relatively large, at about 85 MB for a full scene.
And loading a maximum scene may take several seconds, during which the program seems to freeze.
File sizes and load times will be greatly improved in later versions.
One person may save a full scene to file and give it to another person.
If the other person has less video memory than the first, the scene cannot be fully loaded.
The loader will still try to salvage as much of the scene as fits into the smaller memory.
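Backwards compatibility of this kind is typically achieved with a version field in the file header, and the memory salvage with a simple capacity clamp at load time. The following is a hedged sketch under those assumptions; the magic bytes, version numbering, and layout are invented for illustration and are not the documented .bvt format:

```python
import io
import struct

# Hypothetical sketch of a versioned file header (illustrative only; not
# the actual .bvt layout). A version field lets newer readers keep
# understanding files written by older program versions.

MAGIC = b"BVT1"

def write_header(stream, version, voxel_count):
    stream.write(MAGIC)
    stream.write(struct.pack("<II", version, voxel_count))

def read_header(stream):
    if stream.read(4) != MAGIC:
        raise ValueError("not a BVT file")
    version, voxel_count = struct.unpack("<II", stream.read(8))
    return version, voxel_count

def voxels_to_load(voxel_count, capacity):
    """Salvage as much of the scene as fits into a smaller memory budget."""
    return min(voxel_count, capacity)
```

A reader that knows the header version can skip or default any fields added in later versions, which is what keeps old work files usable.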
Sorry, I'm late.
This was how it sometimes looked before the fix for the current size settings:
(http://pandemonium.graphics/img/artifacts.png)
Having seen your earlier wireframes, and now this, I think the issue you have is called "T-junctions". It can be partly blamed on the limited precision of digital numbers, but even in real life you cannot represent some values exactly, e.g. 1.0/3.0 == 0.333333....
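A tiny plain-Python demo of the precision problem behind T-junction cracks: the shared vertex is stored directly on one edge, but computed as a midpoint on the neighbouring longer edge, and the two results need not be the same float. The specific coordinates are illustrative:

```python
# Illustrative demo of a T-junction mismatch in floating point.

# The small edge's endpoint, stored directly:
direct = 0.15
# The same point computed as the midpoint of the neighbouring long edge:
midpoint = (0.1 + 0.2) / 2

print(direct == midpoint)          # False: 0.15 vs 0.15000000000000002

# Grid-aligned power-of-two coordinates stay exact either way:
print((0.25 + 0.5) / 2 == 0.375)   # True
```

The hairline gap between those two values is exactly the kind of stray-pixel crack a renderer can show along a T-junction.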
Thanks, Naret. Your posts are a good read, and the links you give are valuable resources.
Since that patch I haven't encountered this problem anymore, and if it is still there it might be just a single stray pixel on an HD screen, practically unnoticeable. So it has become low priority compared to other concerns, such as further optimization of memory and performance. I think the reason it works out so well right now is that the coordinate values are not only aligned on a regular grid, they are also well-defined, clean, simple, short values that are less likely to provoke trouble in the math, and so leave a lot of headroom in precision for the renderer's own calculations.
The current scale gives me a lot of advantages in further crunching down on memory, and seems to strike a nice balance between size and the voxel count I'll aim for. So considering all that, I currently don't feel inclined to push it further. But if I do, I will have to look into strategies such as the ones you suggested, to make the most of the available precision range for a maximum number of subdivision levels within the given type's bit size.
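The point about clean grid values leaving precision headroom can be checked directly: coordinates of the form k/2^n are exact in binary floating point as long as they fit the significand, while ordinary decimal steps are not. A small sketch, assuming 32-bit floats as the renderer's working type:

```python
import struct

def exact_in_f32(x):
    """True if a Python float survives a round-trip through a 32-bit float."""
    return struct.unpack("<f", struct.pack("<f", x))[0] == x

# Dyadic grid coordinates (k / 2**n) subdivide cleanly:
print(exact_in_f32(3 / 2**10))   # True
print(exact_in_f32(0.375))       # True

# A decimal step like 0.1 already carries error before any math happens:
print(exact_in_f32(0.1))         # False
```

Every exactly-representable coordinate spends none of the type's precision on merely storing itself, leaving the full error budget for the renderer's transforms.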
Hrrm, but I'm still not settled on the issue. It may still turn out that the things I originally had in mind prove a failure, or not worth it in comparison. It's possible I will have to resort to more classic meshing strategies such as those described in your latest link; that might simply end up being the best of both worlds with the fewest problems, all things considered. But we'll have to wait and see. It's a long road, a lot of work.
But it keeps being a tough question. It's really a trade-off between problems. The reason I avoided automated mesh optimizations is how much runtime cost they add to model manipulations. There is a variety of methods with varying costs, but even the fastest adds a substantial amount. Blackbox tries to keep that experience more fluid, and instead relies on the artist to sufficiently optimize the grid manually through multi-resolution construction. The resulting poly count is less ideal than with auto-mesh methods that go beyond cube form, so it's a question of framerate cost versus latency on model changes.
Add to that, I've grown to like the current cubic organization of the grid composition from a work perspective. The plans to optimize memory differently aim to keep that, and instead have artifacts compensated for at no cost by raw precision range.
Balancing all these considerations is difficult. For the time being I will keep going with the current approach, and concentrate on building the rest of the core features. At some point though, when seeing it all play out, a decision must be made about which kind of costs we'd rather settle with, or which kind of costs gets along best with the specific application. Maybe even multiple render modes for choice. Hrrm.
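For contrast with heavier auto-meshing, the cheapest optimization in this family is per-face culling: only emit the cube faces that border empty space, which costs almost nothing on edits. A hedged sketch with a sparse voxel set (names are illustrative, not Blackbox internals):

```python
# Hypothetical sketch of per-face culling on a sparse voxel grid.
# Cheaper than greedy meshing, so model edits stay fluid.

NEIGHBOURS = [(1, 0, 0), (-1, 0, 0),
              (0, 1, 0), (0, -1, 0),
              (0, 0, 1), (0, 0, -1)]

def visible_faces(solid):
    """Count faces of solid voxels that touch an empty (absent) cell."""
    faces = 0
    for (x, y, z) in solid:
        for dx, dy, dz in NEIGHBOURS:
            if (x + dx, y + dy, z + dz) not in solid:
                faces += 1
    return faces

# A lone cube shows all 6 faces; two touching cubes hide the shared pair.
print(visible_faces({(0, 0, 0)}))             # 6
print(visible_faces({(0, 0, 0), (1, 0, 0)}))  # 10
```

Greedy meshing would further merge coplanar faces into larger quads, at the cost of re-meshing work on every model change, which is precisely the latency trade-off described above.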
The reason I avoided automated mesh optimizations is because of how much it adds runtime costs on model manipulations. Even though there is a variety of methods with more or less costs, even the fastest does add a substantial amount. Blackbox tries to keep that experience more fluid, and rather relies on the artist to sufficiently optimize the grid manually by multi-resolution construction. The poly count of that is less ideal than by auto mesh methods that go beyond cube form. So it's a question of framerate costs in that versus latency on model changes. Add to that, I've grown to like the current cubic organization of the grid composition from a work perspective. The plans on how to optimize memory differently aims to keep that. And rather have artifacts compensated for at no costs by raw precision range. Balancing all these considerations is a difficult question. For the time being I will keep going with the current approach, to concentrate building the rest of the core features. At some point though, when seeing it all play out, a decision must be made which kind of costs we'd rather settle with, or which kind of costs gets along best with the specific case of application. Maybe even multiple render modes for choice. Hrrm.
I agree, a fluid working tool is very important; what is the fastest and most correct rendering worth if you cannot edit it?