I recently ran into a problem in an SQL script where an insert statement (which inserted a few rows) targeted a column of type varchar(10), but all of the values it inserted were ints.
However, when I added a new row of data that used an actual string for the varchar column, SQL tried to convert that string to int, even though the column is of type varchar(10).
Here is an example that you can run locally.
CREATE TABLE dbo.TestTable ( VarcharColumn varchar(10) NOT NULL ) ON [PRIMARY]

insert into TestTable(VarcharColumn) Values (1)
When you run this, it inserts fine; SQL silently converts the integer value to varchar behind the scenes.
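For what it's worth, you can confirm the silent conversion by reading the row back (a quick sketch against the dbo.TestTable created above; nothing else assumed):

select VarcharColumn from TestTable   -- returns the string '1', not an integer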
However, if you try this:
insert into TestTable(VarcharColumn) Values (1),(2),('Hello')
SQL produces the following error:
Conversion failed when converting the varchar value 'Hello' to data type int.
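For reference, the workaround I'm using is simply to quote every value so each literal is already a varchar (a sketch against the same dbo.TestTable; nothing assumed beyond the table above):

insert into TestTable(VarcharColumn) Values ('1'),('2'),('Hello')   -- all three rows insert without error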
Can someone offer an explanation of the inner workings of SQL in this case? In particular:
- Why SQL allows integers to be inserted into varchar columns.
- Why, if SQL silently converts those integer values to varchar, it then tries to convert the actual varchar value to int when one appears in the list.
I understand how to fix this problem and how to avoid it in the first place, but I'm looking for an explanation of why SQL works this way.