supported splitting packages #19
jxq1997216 wants to merge 3 commits into ValveResourceFormat:master
Conversation
Prepare to adjust the code structure of the writing part in the future
… files under special conditions
This needs tests (ideally without writing gigabytes to disk though)

Excuse me, may I ask what I need to do?
```csharp
var fileTreeSize = stream.Position - headerSize;
// clear sub file
for (ushort i = 0; i < 999; i++)
```
This deletes the chunk files produced by previous runs. I believe that when a user reduces the maximum byte size and recreates the chunk files, leftover chunk files from the previous run could be very confusing.
That's up to them to clean up then, not really our job to arbitrarily loop over 1k files. We only care that the _dir.vpk references the correct chunk files, which will be overwritten.
You're right, we shouldn't make that decision on the user's behalf.

Okay, let me try something. I haven't written anything like this before.
```csharp
namespace SteamDatabase.ValvePak
{
    internal sealed class WriteEntry(ushort archiveIndex, uint fileOffset, PackageEntry entry)
```
I don't think this is needed. You can calculate the ArchiveIndex directly in AddFile.
You can look at Valve's packedstore.cpp to see how they handle adding files:
- CPackedStore::AddFile has a bMultiChunk bool.
- They keep track of m_nHighestChunkFileIndex and increase it when a file's offset exceeds m_nWriteChunkSize, which defaults to 200 * 1024 * 1024 bytes.
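A minimal sketch of that bookkeeping, assuming hypothetical names (`ChunkIndexSketch`, `PlaceFile`, `WriteChunkSize`) that only mirror Valve's fields and are not the library's actual API:

```csharp
internal static class ChunkIndexSketch
{
    // Mirrors Valve's m_nWriteChunkSize default of 200 MiB
    // (assuming the PR would use the same constant).
    private const long WriteChunkSize = 200L * 1024 * 1024;

    // Given the highest chunk index so far and the write offset within the
    // current chunk, account for a new file of fileLength bytes. If the
    // offset has passed the chunk size, start a new chunk, like Valve
    // bumping m_nHighestChunkFileIndex in CPackedStore::AddFile.
    public static (ushort ChunkIndex, long Offset) PlaceFile(
        ushort chunkIndex, long currentOffset, long fileLength)
    {
        if (currentOffset > WriteChunkSize)
        {
            chunkIndex++;
            currentOffset = 0;
        }

        return (chunkIndex, currentOffset + fileLength);
    }
}
```

With this shape there is no per-entry `WriteEntry` class to carry the archive index; the caller just threads the `(chunkIndex, offset)` pair through successive `AddFile` calls.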
That sounds good. I'll go take a look at packedstore.cpp; can you tell me where it is?
```csharp
const byte NullByte = 0;

// File tree data
bool isSingleFile = entries.Sum(s => s.TotalLength) + headerSize + 64 <= maxFileBytes;
```
I don't like using maxFileBytes here, we should just have a bool to specify that we want to multi-chunk.
This size calculation is also going to be incorrect if we want to write file hashes.
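The suggestion could be sketched like this; the `multiChunk` flag and the 64-byte slack are assumptions for illustration, not the PR's actual signature:

```csharp
// Decide single-file vs multi-chunk from an explicit flag instead of
// inferring intent from maxFileBytes. Hypothetical helper for illustration.
static bool IsSingleFile(bool multiChunk, long totalEntryBytes, long headerSize, long maxFileBytes)
{
    if (!multiChunk)
    {
        return true; // caller explicitly asked for a single archive
    }

    // Note: this still ignores the size of any hash sections,
    // which is the second point above.
    return totalEntryBytes + headerSize + 64 <= maxFileBytes;
}
```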
We currently have this, but ideally it should be calculated for the chunks. Ref in Valve's code: HashAllChunkFiles

Actually, I'm not quite sure how to calculate the hash value here. I think I should first take a look at cstrike15_strc
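For orientation, hashing every chunk file in the spirit of Valve's HashAllChunkFiles might look like the sketch below. VPK dir files do carry MD5 checksums for archive data; the chunk file-naming pattern mentioned in the comment is an assumption:

```csharp
using System.IO;
using System.Security.Cryptography;

internal static class ChunkHashSketch
{
    // Compute the MD5 of one whole chunk file (e.g. "pak01_001.vpk";
    // that naming pattern is assumed here for illustration).
    public static byte[] HashChunkFile(string path)
    {
        using var md5 = MD5.Create();
        using var stream = File.OpenRead(path);
        return md5.ComputeHash(stream);
    }
}
```

Streaming through `ComputeHash(Stream)` avoids loading a whole ~200 MiB chunk into memory at once.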
Writing a package now supports splitting. You can use it like this:
package.Write("writePath", maxPackageBytes);
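Put together, usage might look like the sketch below. The `Write(path, maxBytes)` call comes from this PR; the `AddFile` call is mentioned in the review above, but its exact signature here is an assumption:

```csharp
using var package = new Package();

// Hypothetical file addition; the real AddFile signature may differ.
package.AddFile("materials/example.vmt", File.ReadAllBytes("example.vmt"));

// Write, splitting output into chunk files of at most maxPackageBytes each
// (Valve's own default chunk size is 200 * 1024 * 1024 bytes).
package.Write("writePath", 200L * 1024 * 1024);
```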