Split pipe delimited columns in TSQL

SQL Whisper 1 Reputation point

What's the best way to split pipe-delimited columns in SQL Server 2017?


declare @table table (sno int,sname varchar(200),sname1 varchar(200))  
Insert into @Table(sno,sname,sname1)  
Values (1,'Mark|James','Dallas|Houston')  

Expected output:

1 Mark Dallas
1 James Houston

Thank you.

7 answers

  1. Ronen Ariely 15,096 Reputation points

    Good day,

    As of SQL Server 2016 you have a built-in function named STRING_SPLIT.

    With that being said, this question implies a very problematic design of your relational database. You should really rethink your database design and/or the type of database which you chose to use.
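    For reference, a minimal sketch of STRING_SPLIT (my illustration, not part of the original answer). One caveat for this question: before SQL Server 2022 added the enable_ordinal argument, STRING_SPLIT does not guarantee the order of its output rows, so pairing up two split columns by position is not reliable with STRING_SPLIT alone on SQL Server 2017.

    ```sql
    -- Minimal STRING_SPLIT illustration (SQL Server 2016+, compatibility level 130+).
    -- Caution: without the enable_ordinal argument (SQL Server 2022+),
    -- the order of the returned rows is not guaranteed.
    SELECT value
    FROM   STRING_SPLIT('Mark|James', '|');
    ```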

  2. Yitzhak Khabinsky 24,941 Reputation points

    Hi @SQL Whisper ,

    As @Ronen Ariely already pointed out, it is better to use XML (or Json) for such cases.

    Please try the following solution that is based on XML and XQuery.
    It is a two step process:

    1. CTE is composing a proper XML.
    2. SELECT outputs desired relational/rectangular output.

    The CTE emits XML structured as follows:

      <r name="Mark" city="Dallas" />  
      <r name="James" city="Houston" />  


    -- DDL and sample data population, start  
    DECLARE @table table (sno INT, sname VARCHAR(200), sname1 VARCHAR(200));  
    INSERT INTO @table (sno, sname, sname1) VALUES  
    (1,'Mark|James','Dallas|Houston'),  
    (2,'Mary|Katy|Paula','Miami|Fort Lauderdale|Orlando'); -- # of tokens could be dynamic  
    -- DDL and sample data population, end  

    DECLARE @separator CHAR(1) = '|';  

    ;WITH rs AS  
    (  
     SELECT sno  
     , TRY_CAST('<root><r>' +  
        REPLACE(sname + @separator + sname1, @separator, '</r><r>') +  
        '</r></root>' AS XML).query('<root>  
        {  
          let $counter := count(/root/r) div 2  
          for $x in /root/r[position() le $counter]  
          let $pos := count(/root/r[. << $x[1]]) + 1  
          return <r name="{$x/text()}" city="{/root/r[$pos + $counter]/text()}"></r>  
        }  
        </root>') AS xmldata  
     FROM @table  
    )  
    SELECT rs.sno  
     , n.value('@name','VARCHAR(30)') AS [name]  
     , n.value('@city','VARCHAR(30)') AS [city]  
    FROM rs  
     CROSS APPLY xmldata.nodes('/root/r') AS t(n)  
    ORDER BY rs.sno;  


    | sno | name  | city            |  
    | --- | ----- | --------------- |  
    |   1 | Mark  | Dallas          |  
    |   1 | James | Houston         |  
    |   2 | Mary  | Miami           |  
    |   2 | Katy  | Fort Lauderdale |  
    |   2 | Paula | Orlando         |  

  3. Ronen Ariely 15,096 Reputation points

    Hi :-)

    According to your answer to my clarification question the following should solve your need (please confirm)


    First, I create a table for testing, with more rows and options than the one you provided.

    CREATE table T1 (sno int, sname varchar(200), sname1 varchar(200))  
    Insert into T1(sno,sname,sname1) Values  
     (1,'1A','1a'), (1,'2A|2B','2a|2b'),  
     (1,'3A|3B|3C','3a|3b|3c'), (1,'4A|4B|4C|4D','4a|4b|4c|4d')  
    SELECT * FROM T1  


    And here is the solution

    ;With MyCTE0 as (  
     select sno, sname, sname1  
     ,s = '["' + Replace(sname, '|', '","') + '"]'  
     ,s1 = '["' + Replace(sname1, '|', '","') + '"]'  
     from T1  
    ),  
    MyCTE1 as (  
     select sno, s, s1  
     , k1 = t.[key], v1 = t.[value]  
     from MyCTE0  
     CROSS APPLY OPENJSON (s, N'$') t  
    )  
    SELECT sno, v1, v2  
    FROM MyCTE1  
    CROSS APPLY (SELECT t1.[key] k2, t1.[value] v2 FROM OPENJSON (s1, N'$') t1 where t1.[key] = MyCTE1.k1) t  



  4. Erland Sommarskog 100.9K Reputation points MVP

    Here is a solution that does not use XML or JSON (which both can result in problems if the data includes characters with a special function in these formats):

    declare @Table table (sno int, sname varchar(200), sname1 varchar(200))  
    Insert into @Table(sno,sname,sname1)  
    Values (1,'Mark|James','Dallas|Houston')  

    SELECT t.sno, s.str + ' ' + s1.str  
    FROM   @Table t  
    CROSS  APPLY iter_charlist_to_tbl(t.sname, '|') AS s  
    CROSS  APPLY iter_charlist_to_tbl(t.sname1, '|') AS s1  
    WHERE  s.listpos = s1.listpos  

    You find the code for iter_charlist_to_tbl here: http://www.sommarskog.se/arrays-in-sql-2005.html#iter-list-of-strings
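    In case you cannot create functions from an external source, here is a sketch of an inline table-valued function with the same output shape (str plus listpos). This is my own illustration, not the actual iter_charlist_to_tbl, and a recursive CTE like this will be slower than a tuned splitter:

    ```sql
    -- Hypothetical stand-in for iter_charlist_to_tbl:
    -- returns one row per token, with its 1-based position in the list.
    CREATE FUNCTION dbo.charlist_to_tbl (@list nvarchar(max), @delim nchar(1))
    RETURNS TABLE
    AS RETURN
       WITH parts AS (
          -- Anchor: the first token runs from position 1 to the first delimiter.
          SELECT listpos = 1,
                 start   = 1,
                 stop    = CHARINDEX(@delim, @list + @delim)
          UNION ALL
          -- Recurse: each next token starts right after the previous delimiter.
          SELECT listpos + 1,
                 stop + 1,
                 CHARINDEX(@delim, @list + @delim, stop + 1)
          FROM   parts
          WHERE  stop <= LEN(@list)
       )
       SELECT listpos,
              str = SUBSTRING(@list, start, stop - start)
       FROM   parts;
    ```

    Queries against it need OPTION (MAXRECURSION 0) when a list can have more than 100 tokens, since a recursive CTE otherwise hits the default recursion limit.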

  5. Erland Sommarskog 100.9K Reputation points MVP

    Yet another solution. This one is based on a performance test I did, where I tried the solutions posted by Yitzhak, Ronen and me with a fairly large data set. I gave each of them ten minutes to run before I killed them. Looking at the plans, Ronen's is probably the one that would have completed first, but in how many hours, I don't know.

    While I was waiting, I wrote a new solution which completed in three minutes. With a faster string splitter the time can be reduced further. But the key is that the two strings need to be split separately, so that we can join over an index.

    SQLWhisper will have to accept that the names have changed: id = sno, ShipName = sname, ShipAddress = sname1.

    CREATE TABLE #t1 (id  int NOT NULL,  
                      pos int NOT NULL,  
                      ShipName nvarchar(60) NOT NULL,  
                      PRIMARY KEY (id, pos))  
    CREATE TABLE #t2 (id  int NOT NULL,  
                      pos int NOT NULL,  
                      ShipAddress nvarchar(60) NOT NULL,  
                      PRIMARY KEY (id, pos))  
    INSERT #t1(id, pos, ShipName)  
      SELECT t.id, s.listpos, s.nstr  
      FROM   PipeData t  
      CROSS  APPLY iter_charlist_to_tbl(t.ShipName, '|') AS s  
    INSERT #t2(id, pos, ShipAddress)  
      SELECT t.id, s.listpos, s.nstr  
      FROM   PipeData t  
      CROSS  APPLY iter_charlist_to_tbl(t.ShipAddress, '|') AS s  
    SELECT t1.id, t1.ShipName + ' ' + t2.ShipAddress as str  
    FROM   #t1 t1  
    JOIN   #t2 t2 ON t1.id = t2.id  
                 AND t1.pos = t2.pos  

    For the curious, my test script is here: 34739-pipedatasplt.txt. The BigDB database that I load the data from is on http://www.sommarskog.se/present/BigDB.bak. Warning! This is a 3GB download, and the full database size is 20 GB. SQL 2016 or later is needed. (This is a demo database that I had uploaded already.)