

ADO.NET with Visual Basic .NET

Posted September 9, 2003

Chat Date: September 2, 2003

Please note: Portions of this transcript have been edited for clarity

Chat Participants:

  • Karol Zadora, VB Test
  • Steve Stein, VB UE
  • Michael Pizzo, SQL Architect
  • Hussein Abuthuraya, PSS
  • David Sceppa, SQL Test
  • Brad Rhodes, ADO.Net PM

Moderator: Ed_H (Microsoft)
Welcome to today’s chat on ADO.NET with Visual Basic .NET. We have members of the Visual Basic team here to answer your questions today. I will ask the hosts to introduce themselves.

Host: DavidSc (Microsoft)
My name is David Sceppa. I work with the Microsoft WebData team and am also the author of "Microsoft ADO.NET" available from Microsoft Press.

Host: Hussein (Microsoft)
Hi, this is Hussein with the Data Access using VB/VB.NET Support team.

Host: bradrhod (Microsoft)
Hi, I am Brad Rhodes, Lead Technical PM for ADO.NET.

Host: karol (Microsoft)
Hi, I am Karol, a developer on the Visual Studio data design team.

Host: MikeP (Microsoft)
I'm Mike Pizzo; software architect in the WebData team (owners of ADO.NET and other data access interfaces...)

Host: Steve (Microsoft)
Hi all - I'm Steve Stein, and I'm on the VB Team

Moderator: Ed_H (Microsoft)
And I am Ed Hickey, VS Communities PM. Glad you all could make it today!

Moderator: Ed_H (Microsoft)
Let's get started! Fire away with your questions for our hosts.

Host: DavidSc (Microsoft)
Q: DAO is a superset...?
A: DAO is a predecessor of ADO and ADO.NET.

Host: bradrhod (Microsoft)
Q: Ok, I have a tough one. I would like to see a new connection string option "Start Server" for situations (like MSDE) where you don't know if the server is started...?
A: This is a feature that we are working on for SQL Server for the next version of ADO.NET. We cannot commit 100% at this time, but we are working hard to make this happen. We actually need changes in SQL Server to make this work well.

Host: MikeP (Microsoft)
Q: Is DAO no longer supported in .NET?
A: By far the best data access interface for .NET is ADO.NET. It was designed from the bottom up to integrate with the rest of the .NET Framework and to address today's connected as well as disconnected enterprise scenarios.

Host: Hussein (Microsoft)
Q: Is there any way to get an ADO Recordset from an ADO.NET DataTable besides manually creating it?
A: See the following KB: HOW TO: Convert an ADO.NET DataSet to ADO Recordset in Visual Basic .NET https://support.microsoft.com/default.aspx?scid=kb;en-us;316337

Host: MikeP (Microsoft)
Q: Can you guys tell anything about the next version of ADO.Net?
A: The next version of ADO.NET will build on the existing DataSet/Data Provider model, adding features to make data easier to use and to make performance and scalability even better in an enterprise environment.

Host: bradrhod (Microsoft)
Q: Is there any way to get an ADO Recordset from an ADO.NET DataTable besides manually creating it...?
A: Not currently. You have to do this manually, and we are not planning anything for the future. The concept behind ADO interop is to provide developers a way to transition to ADO.NET from legacy ADO code, so we have not provided the reverse. With any conversion layer (like ADO interop) there are a lot of issues that make it difficult to do across the board. Given resource constraints and the level of investment, it is not likely that we will provide ADO.NET DataSet to ADO Recordset conversions.


Host: MikeP (Microsoft)
Q: Does all DAO code need to be changed to ADO.NET code?
A: The .NET Framework provides interop support for accessing COM libraries from new .NET Framework applications. We have tested ADO access through COM interop, but I'm not aware of significant testing with DAO. Your best bet, by far, will be to use ADO.NET for new .NET applications. It was designed to leverage existing knowledge of ADO and DAO, but to provide much better performance, integration, and scalability in the .NET environment.

Host: DavidSc (Microsoft)
Q: Is there a "better" (performance) way to do paging using ADO.NET, without using a DataGrid or something like that??
A: The most performant solution is to query the back end for just the page of data you want. You can achieve this through page "markers" where you know you're looking for, say rows 41-50. Another option is to use the TOP clause in conjunction with a sub-query.
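
A minimal sketch of the "marker" approach described above, combined with TOP: it assumes a SQL Server Customers table with a CustomerID primary key, and the connection string, table, and column names are illustrative rather than part of the original discussion.

Imports System
Imports System.Data
Imports System.Data.SqlClient

Module PagingSketch
    Sub Main()
        ' Fetch the next page of 10 rows that follow the last key seen on the previous page.
        Dim sql As String = _
            "SELECT TOP 10 CustomerID, CompanyName FROM Customers " & _
            "WHERE CustomerID > @LastKey ORDER BY CustomerID"

        Dim connection As New SqlConnection( _
            "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI")
        Dim command As New SqlCommand(sql, connection)
        command.Parameters.Add("@LastKey", SqlDbType.NChar, 5).Value = "ANTON"

        Dim adapter As New SqlDataAdapter(command)
        Dim page As New DataTable("CustomersPage")
        adapter.Fill(page)   ' Fill opens and closes the connection as needed

        Console.WriteLine("Rows in this page: " & page.Rows.Count.ToString())
    End Sub
End Module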

Host: bradrhod (Microsoft)
Q: I'm with Bruno. Paging needs to be done on the engine. More, smaller rowsets are more efficient
A: We are considering some minor support in the next version of SQL Server. You are correct that this is best handled with support from the database engine. Problem is that when there are many concurrent users accessing the database at the same time with transactions, it is hard to come up with a row numbering scheme.

Host: MikeP (Microsoft)
Q: So is there a way to "autonumber" each row, in SQL result, so the clause would be something like SELECT * FROM Test WHERE AutoNumber BETWEEN (10 AND 20).
A: Line numbering is not part of standard SQL, although some vendors have extensions for adding row numbers to the results of a query. These types of generated row numbers can be used in a query in order to limit results, but if you have control over the query being generated you're generally better off using a primary key to do paging. Using a primary key to scroll through pages of values also ensures that positioning is not thrown off by the addition or deletion of rows in the results being read.

Host: bradrhod (Microsoft)
Q: "Minor support" Does that mean it only works for developers under the age of 18...?
A: Nope. It is open to developers of all ages :-). It means that we are not doing a full row numbered paging solution. I was just pointing out that this problem is actually harder to support than first considered.

Host: Kawarjit (Microsoft)
A: chuckbuc - In this specific case the parent and child tables have a PK-FK constraint and cascade on delete is set to None. Now if you attempt to delete the parent row, you'd get the exception. If you were to set it to cascade on delete, you won't get the exception, but the corresponding child rows will be deleted as well.
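
A small sketch of the behavior described here, using in-memory DataTables; the table and column names are made up for illustration. With Rule.None, deleting a parent row that still has children throws; switching to Rule.Cascade deletes the child rows along with the parent.

Imports System
Imports System.Data

Module CascadeSketch
    Sub Main()
        Dim ds As New DataSet()

        Dim parent As DataTable = ds.Tables.Add("Orders")
        parent.Columns.Add("OrderID", GetType(Integer))
        parent.PrimaryKey = New DataColumn() {parent.Columns("OrderID")}

        Dim child As DataTable = ds.Tables.Add("OrderDetails")
        child.Columns.Add("OrderID", GetType(Integer))
        child.Columns.Add("Product", GetType(String))

        Dim fk As New ForeignKeyConstraint("FK_Orders_Details", _
            parent.Columns("OrderID"), child.Columns("OrderID"))
        fk.DeleteRule = Rule.None    ' change to Rule.Cascade to delete child rows automatically
        child.Constraints.Add(fk)

        parent.Rows.Add(New Object() {1})
        child.Rows.Add(New Object() {1, "Chai"})

        Try
            parent.Rows(0).Delete()  ' throws with Rule.None because a child row still exists
        Catch ex As InvalidConstraintException
            Console.WriteLine("Delete blocked: " & ex.Message)
        End Try
    End Sub
End Module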

Host: Hussein (Microsoft)
Q: Using OleDb to connect with Jet Driver to Excel?
A: Make sure that you use a correct connection string. See the following KB: HOW TO: Use Jet OLE DB Provider 4.0 to Connect to ISAM Databases. https://support.microsoft.com/default.aspx?scid=kb;en-us;326548
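
As an illustration, a connection string of the kind the KB article covers might look like the sketch below; the workbook path and sheet name are placeholders, and the "Excel 8.0;HDR=Yes" extended properties assume an Excel 97-2003 workbook with a header row.

Imports System
Imports System.Data
Imports System.Data.OleDb

Module ExcelSketch
    Sub Main()
        Dim connectionString As String = _
            "Provider=Microsoft.Jet.OLEDB.4.0;" & _
            "Data Source=C:\Book1.xls;" & _
            "Extended Properties=""Excel 8.0;HDR=Yes"""

        Dim adapter As New OleDbDataAdapter("SELECT * FROM [Sheet1$]", connectionString)
        Dim table As New DataTable("Sheet1")
        adapter.Fill(table)   ' the adapter opens and closes the connection itself

        Console.WriteLine("Rows read from the worksheet: " & table.Rows.Count.ToString())
    End Sub
End Module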

Host: MikeP (Microsoft)
Q: You can query SELECT TOP 10 ..... WHERE PK BETWEEN LastHighest AND LastHighest+20
A: If you know the sort by columns, you can also build a query against SQL Server to select a range of rows (in this case, rows 90-100 from "Select * from customers", ordered by contactname) as follows: select * from (select top 10 * from (select top 100 * from (select * from customers) as t1 order by contactname desc ) as t2 order by contactname asc) as t3 order by contactname desc. Note, however, that this requires several sorts on the server, so you are still better off using Primary Key values.
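
The same nested-TOP statement, spread across lines purely for readability and run through a data adapter; the query text is unchanged from the answer above, and the connection string is a placeholder.

Imports System
Imports System.Data
Imports System.Data.SqlClient

Module NestedTopSketch
    Sub Main()
        Dim sql As String = _
            "SELECT * FROM " & _
            "  (SELECT TOP 10 * FROM " & _
            "    (SELECT TOP 100 * FROM " & _
            "      (SELECT * FROM Customers) AS t1 " & _
            "     ORDER BY ContactName DESC) AS t2 " & _
            "   ORDER BY ContactName ASC) AS t3 " & _
            "ORDER BY ContactName DESC"

        Dim adapter As New SqlDataAdapter(sql, _
            "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI")
        Dim page As New DataTable("CustomersPage")
        adapter.Fill(page)
        Console.WriteLine("Rows returned: " & page.Rows.Count.ToString())
    End Sub
End Module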

Host: DavidSc (Microsoft)
Q: Is there a way to disable or prevent row deletion in a DataGrid?
A: The DataGrid is actually bound to a DataView. Setting DataView.AllowDelete = False is the simplest way to prevent the user from deleting the row.
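
A short sketch of that setting, assuming a Windows Forms DataGrid; the helper, grid, and table names are illustrative.

Imports System.Data
Imports System.Windows.Forms

Public Class GridHelper
    ' Bind the grid through a DataView that refuses deletes (and, optionally, new rows).
    Public Shared Sub BindWithoutDeletes(ByVal grid As DataGrid, ByVal table As DataTable)
        Dim view As New DataView(table)
        view.AllowDelete = False   ' pressing Delete in the grid no longer removes rows
        view.AllowNew = False      ' also hide the "add new row" placeholder (optional)
        grid.DataSource = view
    End Sub
End Class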

Host: Kawarjit (Microsoft)
A: DareKLL: I meant "cascade on delete". Referential integrity constraints kick in when you have a relation with PrimaryKey and Foreign keys. Cascade on delete will cascade on delete operation.

Host: bradrhod (Microsoft)
Q: Does ADO.NET "fix" the behavior of the Connection Lifetime argument in the connection string so it actually affects the time the connection remains in the pool...?
A: Actually, this is by design. The design is to have the connection reconnect to the server after a certain amount of time has passed. The idea is that if you bring a second node (node B) in a virtual cluster online, connection pooling will keep connecting to the first server. Using Connection Lifetime allows you to throw connections out and have the load rebalance after a new server comes online. For this it is fine to have the connection closed the next time it comes from the pool once it is beyond the connection lifetime.
To actually throw them out actively we would have to have a fairly complex design that has a thread or another connection request go through the list of connections in the pool and throw out any stale ones. This is pretty tricky to do. For the design goal of load balancing connections to other servers, there is no real win in closing the connection while it sits in the pool. To more aggressively reduce connections to the server you can just keep Max Pool Size low.
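
As an illustration of those two settings, a connection string along these lines could be used; the server and database names are placeholders and the numeric values are arbitrary examples, not recommendations.

Imports System.Data.SqlClient

Module PoolingSketch
    Sub Main()
        ' Connection Lifetime=120: a pooled connection older than 120 seconds is
        ' discarded rather than reused, so load can rebalance to a server that
        ' has just come online. Max Pool Size=20: caps the number of pooled connections.
        Dim connection As New SqlConnection( _
            "Data Source=MyCluster;Initial Catalog=Northwind;Integrated Security=SSPI;" & _
            "Connection Lifetime=120;Max Pool Size=20")
        connection.Open()
        ' ... use the connection ...
        connection.Close()   ' Close returns the connection to the pool
    End Sub
End Module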

Host: MikeP (Microsoft)
Q: Len: ... When you create a Command object and build a Parameters collection, ADO (or ADO.NET) handles the single quotes and other framing details for you. Len: ... Yes, I know; I'd like to know how it does it.
A: You need to quote values within the command text in order for the Query Processor to parse the statement correctly. When you specify parameters, they are sent in a structure separate from the query text, so the Query Processor doesn't have to parse them, and quotes are not needed.

Host: Kawarjit (Microsoft)
Q: Joe: Is there a way to have a virtual column in an index when you create it?
A: DataTable supports expression columns. These can be thought of as derived columns. However, the expression can only be a simple expression like "c1+c2" or Sum(c1); it cannot be a SELECT expression.
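
A minimal sketch of both kinds of expression column mentioned, a "c1+c2"-style per-row expression and a Sum(c1)-style aggregate; the table and column names are invented for the example.

Imports System
Imports System.Data

Module ExpressionSketch
    Sub Main()
        Dim items As New DataTable("Items")
        items.Columns.Add("Price", GetType(Decimal))
        items.Columns.Add("Qty", GetType(Integer))
        ' Per-row derived column, a simple "c1 * c2" style expression.
        items.Columns.Add("LineTotal", GetType(Decimal), "Price * Qty")
        ' Aggregate expression evaluated over the whole table.
        items.Columns.Add("TotalPrice", GetType(Decimal), "Sum(Price)")

        Dim row As DataRow = items.NewRow()
        row("Price") = 10D
        row("Qty") = 3
        items.Rows.Add(row)

        Console.WriteLine(items.Rows(0)("LineTotal"))   ' 30
        Console.WriteLine(items.Rows(0)("TotalPrice"))  ' 10 (only one row so far)
    End Sub
End Module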

Host: MikeP (Microsoft)
Q: I prefer sending in Params b/c it's more precise and easier to construct, but is there a performance benefit/hindrance to using them vs concatenated SQL Strings?
A: Passing in parameters may actually be more efficient, because it allows the Query Processor to re-use the query plan for multiple sets of parameters. There may also be conversions required in parsing string values when concatenated, while parameters are sent in their native types. Finally, but perhaps most importantly, passing parameters is safer than concatenating SQL strings, because you know exactly what query is being executed and how the parameter values are used.

Host: MikeP (Microsoft)
Q: We just have a ton of legacy code, so rewriting it with params and procs is not something he wants to do.
A: Don't confuse stored procs with params. Although stored procs (almost) always take parameters, parameters are also useful for dynamic SQL. So even if you don't want to move the commands into stored procs, passing parameters into dynamic SQL, instead of concatenating values, is a much better solution.
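
A sketch of the same dynamic SQL written with a parameter instead of concatenated values; the table, column, value, and connection string are illustrative.

Imports System
Imports System.Data
Imports System.Data.SqlClient

Module ParameterSketch
    Sub Main()
        Dim connection As New SqlConnection( _
            "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI")

        ' Concatenated form (quoting is up to you):
        '   "SELECT ... WHERE ContactName = '" & name & "'"
        ' Parameterized form: the value travels outside the SQL text, in its native type.
        Dim command As New SqlCommand( _
            "SELECT CustomerID, CompanyName FROM Customers WHERE ContactName = @Name", connection)
        command.Parameters.Add("@Name", SqlDbType.NVarChar, 30).Value = "Maria Anders"

        connection.Open()
        Dim reader As SqlDataReader = command.ExecuteReader()
        While reader.Read()
            Console.WriteLine(reader("CustomerID").ToString() & " - " & reader("CompanyName").ToString())
        End While
        reader.Close()
        connection.Close()
    End Sub
End Module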

Host: Guest_pablo_ms
Q: Using Blobs, I would like to support the ACCEPT_RANGE Http Header, and retrieve only the bytes requested.. Can I do that??
A: Most database servers have special syntax to get parts of BLOBs. In SQL Server, you can use TEXTPTR and READTEXT to fetch parts of a BLOB instead of the whole thing. There is some overhead (and you have to do some extra work), so it's usually only worth it if this is very common in your app. You can look for "READTEXT" in the SQL Server Books Online for more information.
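
A rough sketch of the TEXTPTR/READTEXT approach run from ADO.NET; the pubs sample database's pub_info.logo column and the '0736' key are used purely as an example, and the connection string is a placeholder.

Imports System
Imports System.Data
Imports System.Data.SqlClient

Module BlobRangeSketch
    Sub Main()
        ' Grab a text pointer for the BLOB, then read only bytes 0-99 of it.
        Dim sql As String = _
            "DECLARE @ptr varbinary(16) " & _
            "SELECT @ptr = TEXTPTR(logo) FROM pub_info WHERE pub_id = '0736' " & _
            "READTEXT pub_info.logo @ptr 0 100"

        Dim connection As New SqlConnection( _
            "Data Source=(local);Initial Catalog=pubs;Integrated Security=SSPI")
        Dim command As New SqlCommand(sql, connection)

        connection.Open()
        Dim reader As SqlDataReader = command.ExecuteReader()
        If reader.Read() Then
            Dim chunk As Byte() = CType(reader(0), Byte())
            Console.WriteLine("Bytes returned: " & chunk.Length.ToString())
        End If
        reader.Close()
        connection.Close()
    End Sub
End Module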

Host: MikeP (Microsoft)
Q: I think you all have answered for me (I kind of figured it to be the case) , but even using Params without Procs is both cleaner (obviously) and more efficient (which is what I was wondering)
A: Correct.

Host: MikeP (Microsoft)
Q: However, you should only run CLR code when it can't be done in T-SQL. Performance hit supposedly.
A: Not necessarily. T-SQL is well tuned for data crunching (e.g., searched updates, SELECT INTO, etc.). However, it is not as well tuned for processing logic. Many types of computations (e.g., computing factorials) can be performed more efficiently using ADO.NET.

Moderator: Ed_H (Microsoft)
This has been a GREAT chat. Thank you to everyone. Unfortunately, it is time to go. Thanks for participating, and we'll see you next time!

For further information on this topic please visit the following:

Newsgroups: microsoft.public.dotnet.languages.vb

VB .NET Transcripts: Read the archive of past VB .NET chats.

Website: Visit the Microsoft Visual Basic .NET site.
