You can use SSIS to load the files. In the connection manager, define the file as Ragged Right with a single column holding the entire record. Then add a row number transformation to the data flow, which assigns a sequence number to every row read from the file. That row number becomes your record identifier and lets you select the data in the order it was read.
To parse the data, you can work in either SSIS or SQL. In SSIS, add a script component and define three separate outputs: header, detail, and trailer. In the script, evaluate each row to determine whether it is a header, detail, or trailer record, parse it according to the layout (substring-style slicing for fixed-length files, or splitting on the delimiter for delimited files), and populate the fields of the matching output. In SQL Server, you would have a separate table for each record type and use SUBSTRING (fixed-length) or STRING_SPLIT (delimited) to parse each field and validate its data type.
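To illustrate the classify-then-parse flow, here is a minimal sketch in Python. The actual SSIS script component would be written in C# or VB.NET, and the record-type codes, field positions, and field names below are hypothetical, not part of any real layout:

```python
# Illustrative only: the SSIS script component itself is C# or VB.NET.
# The record-type codes ("HD"/"TR") and field positions are hypothetical.

def classify(line):
    """Return 'header', 'detail', or 'trailer' from a record-type code
    in the first two characters (a common fixed-length convention)."""
    code = line[:2]
    if code == "HD":
        return "header"
    if code == "TR":
        return "trailer"
    return "detail"

def parse_detail(line):
    """Parse a fixed-length detail record with substring-style slicing."""
    return {
        "record_type": line[0:2].strip(),
        "account_no":  line[2:12].strip(),
        "amount":      line[12:22].strip(),
    }

# Route each row to the output matching its record type,
# mirroring the three outputs of the script component.
rows = ["HD20240101", "DT0000012345    150.00", "TR0000000001"]
outputs = {"header": [], "detail": [], "trailer": []}
for row in rows:
    outputs[classify(row)].append(row)
```

The same slicing maps directly onto T-SQL SUBSTRING calls if you prefer to parse after loading.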
The key in SSIS is adding the row number transformation to the pipeline so that each record read from the file carries a row number identifier.
https://richardswinbank.net/ssis/adding_row_numbers_to_ssis_dataflows
http://www.sqlis.com/post/row-number-transformation.aspx
https://www.timmitchell.net/post/2015/05/26/row-numbers-and-running-totals-in-ssis/
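Conceptually, what the row number transformation in those links does is simple; a minimal Python sketch of the idea (not SSIS code) looks like this:

```python
# Sketch of the row number idea: assign a 1-based sequence number
# to each row in the order it is read from the file.

def add_row_numbers(rows):
    """Yield (row_number, row) pairs preserving file order."""
    for i, row in enumerate(rows, start=1):
        yield i, row

numbered = list(add_row_numbers(["HD20240101", "DT0000012345", "TR0000000001"]))
# numbered[0] is (1, "HD20240101"), numbered[2] is (3, "TR0000000001")
```

Because the number is assigned as rows stream through, it preserves the physical file order even after the data lands in an unordered table.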
You can also fold the row numbering into the script component that creates the separate outputs, so each output type keeps its own counter and row numbers. That is useful when a file contains multiple header or trailer records.
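A per-type counter can be sketched like this (again in Python for illustration; the SSIS script component would hold one counter variable per output):

```python
# Sketch: keep an independent counter for each output type, so
# header, detail, and trailer streams each get their own row numbers.
from collections import defaultdict

counters = defaultdict(int)

def number_by_type(record_type):
    """Increment and return the counter for this record type."""
    counters[record_type] += 1
    return counters[record_type]

seq = [number_by_type(t) for t in ["header", "detail", "detail", "trailer"]]
# seq is [1, 1, 2, 1]: detail rows count independently of header/trailer rows
```

With separate counters, a second header or trailer record is immediately visible as row number 2 within its own stream, which makes validating file structure straightforward.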