Hi Alexios,

I hope you are doing well. Here is the final approach I am thinking about.

For ingestion, I plan to use the SPDX Python tools to parse v2 documents and convert them to RDF triples; SPDX v3 documents can be loaded more directly. In both cases, the triples will be inserted into the shared triplestore via the abstraction library. Because spdxId is the sole identity key, inserting triples from a second SBOM that references an already-stored element will simply add new triples to the existing node.

For export, I will build SPARQL queries that reconstruct a consistent subgraph starting from a given root element, following its relationships transitively, and serialize the output back into a valid SPDX document. Round-trip consistency will serve as the main correctness benchmark. As you suggested, I plan to validate the exported document against the SHACL rules published by the SPDX project, rather than validating at ingestion time.

For management utilities, I will implement basic CRUD operations: listing stored SBOMs, deleting by spdxId, and checking for dangling references after deletion.

Regarding profiles: the same ingestion and export logic will handle all profiles without modification. I will test the tooling against both SPDX v3 samples and SPDX v2 documents converted to RDF, across at least two triplestore backends, to verify backend independence.

To make the plan concrete, I have added rough, illustrative sketches of the main steps below my signature.

Does this direction seem well aligned with your expectations for the project? I would really appreciate any corrections before I finalize the proposal.

Best regards,
Manav Gupta
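P.S. Here are the sketches, written in Python with rdflib. They are illustrative only, not final code.

First, ingestion. The store.add_graph() call below is a stand-in for whatever interface the abstraction library ends up exposing, and the v2 path assumes the document has already been converted to its RDF/XML serialization by the SPDX Python tools (I am not relying on any specific converter API here):

    # Ingestion sketch (illustrative). `store` is a hypothetical wrapper
    # around the triplestore abstraction library.
    from rdflib import Graph

    def ingest_spdx_rdf(path, store):
        """Load an SPDX document already in an RDF serialization
        (RDF/XML for converted v2, JSON-LD for v3) and push its triples."""
        g = Graph()
        fmt = "json-ld" if path.endswith((".json", ".jsonld")) else "xml"
        g.parse(path, format=fmt)
        # Because spdxId is the identity key, re-ingesting a document that
        # mentions an already-stored element just adds triples to that node;
        # identical triples are deduplicated by the store.
        store.add_graph(g)  # hypothetical abstraction-library call
        return len(g)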
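Second, export. The query below shows the general shape I have in mind: a CONSTRUCT that walks outward from a root element using the SPARQL 1.1 "any predicate, zero or more hops" idiom (<>|!<>)*. The root IRI is a placeholder, and the real query would be constrained to SPDX relationship structures rather than following every outgoing edge:

    # Export sketch: rebuild the subgraph reachable from one root element.
    from rdflib import Graph

    EXPORT_QUERY = """
    CONSTRUCT { ?s ?p ?o }
    WHERE {
      # (<>|!<>)* = follow any predicate, zero or more hops
      <urn:example:root-spdxid> (<>|!<>)* ?s .
      ?s ?p ?o .
    }
    """

    def export_subgraph(store_graph):
        result = store_graph.query(EXPORT_QUERY)
        out = Graph()
        for triple in result:  # a CONSTRUCT result iterates as triples
            out.add(triple)
        return out

The resulting graph would then be serialized back to an SPDX document, e.g. out.serialize(format="json-ld"), and compared against the original for the round-trip check.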
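Third, validation of the exported document with pyshacl against the SHACL shapes published by the SPDX project (the shapes file name below is a placeholder for whatever file the project actually distributes):

    # Validation sketch: run the SPDX SHACL shapes over an exported graph.
    from pyshacl import validate
    from rdflib import Graph

    def validate_export(exported, shapes_path="spdx-model.shacl.ttl"):
        shapes = Graph().parse(shapes_path, format="turtle")
        conforms, _report_graph, report_text = validate(exported, shacl_graph=shapes)
        return conforms, report_text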
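Finally, a first idea for the dangling-reference check after deletion. This deliberately over-reports (it flags every IRI that is referenced as an object but has no triples of its own, including legitimate external references), so in practice it would be narrowed to SPDX element references:

    # Dangling-reference sketch: IRIs referenced as objects that carry no
    # triples of their own, typically left behind after deleting an SBOM.
    DANGLING_QUERY = """
    SELECT DISTINCT ?ref
    WHERE {
      ?s ?p ?ref .
      FILTER(isIRI(?ref))
      FILTER NOT EXISTS { ?ref ?anyP ?anyO . }
    }
    """

    def dangling_references(store_graph):
        return [str(row.ref) for row in store_graph.query(DANGLING_QUERY)]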