Hello, and thank you for the detailed information!
I have downloaded the arrAsset and I'm analyzing it now.
Looking at the output.json, the first thing that stands out is that the model's bounding box is far off-center:
"boundingBox": {
"min": [
-76.05452728271485,
-0.04086558520793915,
342.1785583496094
],
"max": [
-41.80239486694336,
2.1324777603149416,
370.4200439453125
]
So the center of the scene sits at roughly z ≈ 356 meters, which is behind the (default) far clip plane of the renderer.
Are you compensating for this offset in your app by transforming the model? In any case, I'm not sure whether that is the real issue here, but I'll find out.
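For reference, the center can be computed directly from those values; here's a quick plain-Python check (it assumes boundingBox sits at the top level of output.json):

    import json

    with open("output.json") as f:
        bbox = json.load(f)["boundingBox"]

    # Midpoint of min/max per axis
    center = [(lo + hi) / 2 for lo, hi in zip(bbox["min"], bbox["max"])]
    print(center)  # -> [-58.93, 1.05, 356.30] (meters), i.e. ~356 m out on z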
To answer your other questions:
Q1: No, we don't expose this kind of debug functionality out-of-the-box. (Unless, of course, you do it fully manually through the client API: load the model, find the node, find the material, grab the material's albedo texture handle, then load a small model programmatically and assign that texture handle to the small model's material.)
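To make that sequence concrete, here is a rough Python-style sketch. Every name in it is a hypothetical placeholder, not the real client API (which is C++/C#); it only illustrates the order of the steps:

    # Hypothetical names throughout -- NOT the real ARR client API,
    # just the shape of the manual workflow described above.
    def show_albedo_on_debug_quad(session, model_uri, node_name, quad_uri):
        model = session.load_model(model_uri)        # 1. load the model
        node = model.find_node(node_name)            # 2. find the node
        material = node.mesh.materials[0]            # 3. find the material
        albedo = material.albedo_texture             # 4. grab the albedo texture handle
        quad = session.load_model(quad_uri)          # 5. load a small model programmatically
        quad.root.mesh.materials[0].albedo_texture = albedo  # 6. assign the handle
        return quad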
Q2: There is no hard limit on the number of textures that can be used. There is only a limit on the maximum size of a single texture (16k x 16k), which is in fact a GPU hardware limit. I think I can make the docs more specific here.
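If you want to catch oversized textures before conversion, a pre-flight check is easy to script. A minimal sketch in plain Python with Pillow; the textures directory and the *.png pattern are assumptions for this example:

    from pathlib import Path
    from PIL import Image

    MAX_DIM = 16384  # 16k x 16k: the per-texture GPU hardware limit

    for tex in Path("textures").glob("**/*.png"):
        with Image.open(tex) as img:
            if img.width > MAX_DIM or img.height > MAX_DIM:
                print(f"{tex}: {img.width}x{img.height} exceeds {MAX_DIM}")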
Q3: ARR conversion should keep the coordinates as-is, unless you specify the "recenter to origin" option. In that case it is indeed harder to align multiple models, because each one is recentered to its own average position. However, you can look at this section of the JSON output:
"recenteringOffset": [
0.0,
0.0,
0.0
],
...to see by how much the object was moved during conversion (it is (0,0,0) when re-centering was disabled, as in this case). Aligning the models is then a matter of re-applying this offset to each one.
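As a minimal sketch of that realignment (plain Python; it assumes recenteringOffset sits at the top level of each model's output.json and stores the translation that was applied during conversion, so undoing it means translating by the negative):

    import json

    def realignment_offset(output_json_path):
        # Assumption: the stored offset is the translation applied during
        # conversion; if the sign convention is the opposite, drop the negation.
        with open(output_json_path) as f:
            offset = json.load(f)["recenteringOffset"]
        return [-c for c in offset]

    # Apply this as a translation to each model's root node after loading.
    print(realignment_offset("output.json"))  # -> [-0.0, -0.0, -0.0] here (no recentering)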
I'll keep you updated on my findings with the arrAsset.
Let me know if you have more questions!
Cheers,
Florian