Thursday, March 8, 2012
caching problem
I am using Windows 2003 SP2 with SQL Server 2005 SP2. It's a 16 GB RAM, 8-processor Intel Xeon machine.
We have two files: file1 with 15,000 queries and file2 with 60,000 queries.
If I run file2 alone, it takes 6 minutes the first time, and if I run it a second time it takes 2 minutes. So the second run uses the cache and returns results in 2 minutes; it's taking benefit of the cache.
file2 first: 5 min
file2 second: 2 min
But when I run file1 it takes 2 minutes and file2 takes 6 minutes. After that, if I run file1 again it again takes 2 minutes, and file2 takes 5 minutes. Memory usage goes to 10 GB of the 16 GB.
file1 first: 2 min
file2 first: 5 min
file1 second: 2 min
file2 second: 5 min
Here, the second time I run these files they don't take benefit of the cache and take the same time.
So why, when I run both files sequentially two times, does the second pass not benefit from caching even though 6 GB of memory is available? If there is 6 GB of memory available, then SQL Server should keep all the data in cache and give fast results the second time.
Is there any limitation on the data cache or query plan cache? Does running two different files drop the cache?
What exactly do you mean by files and 15,000 queries? And if SQL Server is using all 16 GB, then you have a problem with your configuration. You need to set a max memory setting of no more than 14 GB to leave room for the OS to do its job.
Andrew J. Kelly SQL MVP
Solid Quality Mentors
<shahdharti@.gmail.com> wrote in message
news:e1cf9402-53a0-4cfc-932d-cb98811fe848@.i7g2000prf.googlegroups.com...
|||On Jan 10, 11:04 pm, "Andrew J. Kelly" <sqlmvpnooos...@.shadhawk.com>
wrote:
Hi Andrew
There are about 15,000 SELECT queries in the first file and 60,000 SELECT queries in the second file. And SQL Server is not using more than 10 GB.
The problem is that if I run only one file two times, then the second time it uses the cache, but if I run the files sequentially two times, then it does not use the cache.
Why is it doing so?
|||60K selects? Wow, that is one heck of a report. You said initially that the memory went from 10 GB to 16 GB, and now you are saying it is not using more than 10 GB. How are you determining this? What I said memory-wise is still true regardless and should be changed if it is not set that way now. The scenario you describe is in no way proof that the cache is not being used. There are many things that can cause this behavior; poorly executed parallel plans or blocking, for instance, can easily account for these numbers. Did you check for blocking? Did you look at the file stats to see how much physical I/O you are doing? And how large is the database itself?
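If it helps, here is a rough sketch of those checks using the SQL Server 2005 DMVs (run it while the slow file is executing):
-- any requests currently blocked?
SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
-- physical I/O per database file since the instance started
SELECT DB_NAME(database_id) AS database_name, file_id,
       num_of_reads, num_of_bytes_read, io_stall_read_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL);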
Andrew J. Kelly SQL MVP
Solid Quality Mentors
<shahdharti@.gmail.com> wrote in message
news:f38dfcd7-42f4-4875-a6fb-9326c08a55b5@.i72g2000hsd.googlegroups.com...
|||On Jan 11, 12:22 am, "Andrew J. Kelly" <sqlmvpnooos...@.shadhawk.com>
wrote:
Sorry, there is a typo: of the 16 GB of memory, 10 GB is used, so I think 6 GB is available. I got this info from Task Manager; it shows 10 GB in use. So in the second case, when the two files are run sequentially two times, why isn't the cache used the second time?
|||Task Manager does not tell you how much memory SQL Server is really using; you should use the Perfmon counters for that. What are the min and max memory settings in SQL Server? And regardless, that does not tell you whether the data is cached or not. There are too many other factors that can be involved. Until you get the actual metrics that I mentioned, you are just guessing and probably barking up the wrong tree.
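For example, a quick sketch that reads the relevant counters and settings from inside SQL Server 2005 (these are the same counters Perfmon exposes):
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Total pages', 'Database pages', 'Page life expectancy');
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)';
EXEC sp_configure 'max server memory (MB)';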
Andrew J. Kelly SQL MVP
Solid Quality Mentors
<shahdharti@.gmail.com> wrote in message
news:6b8e2894-1ccf-4b5c-9a3c-e6ed7c3208c7@.j78g2000hsd.googlegroups.com...
Saturday, February 25, 2012
CAB Files
Can anyone please point me to some documentation on what sql.ppc.wce5.armv4i.CAB does?
I basically need to know which files are placed on the handset by deploying the CAB file.
Thanks,
Farnaz
The CAB file contents are documented here: http://www.microsoft.com/sql/editions/sqlmobile/installsdk.mspx
C2 Auditing: How to change the directory to save log files
I need to turn on C2 auditing on a SQL Server 2000 cluster.
I do not have any space on the C drive, which according to the articles (and BOL) is the default location.
How can I change the directory so these traces are saved to the location I want?
Does anyone have any solid experience with C2?
Thank you,
ktf
Hi
C2 auditing writes to the \mssql\data directory for default instances of SQL Server 2000, or the \mssql$instancename\data directory for named instances.
If you change the default database data location in EM, or
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\MSSQLServer\DefaultData,
it will place them there.
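As a quick sanity check of where the C2 trace is currently writing, a sketch using the SQL Server 2000 trace functions rather than the registry (property 2 in the output is the trace file path):
SELECT traceid, property, value
FROM ::fn_trace_getinfo(default)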
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"ktf" <ktf@.discussions.microsoft.com> wrote in message
news:D1ACF0EF-410E-4306-B826-17C76081C275@.microsoft.com...
|||Well Mr. Epprecht,
I have C and D drives, which are local, and S and R from the SAN. The only location I can use is the S drive. The registry setting is different. Is there a specific registry entry for that matter? I can try to run it and see where it saves the traces, but is there a specific place where I can change the setting for C2 only?
Another question: I have a passive site for disaster recovery (I am not talking about a passive node). Do I have to replicate the trace data? The passive site has a different virtual server name and IP address. As you know, that SQL Server is asleep, but I have a third-party replication system going. Do you know whether a replicated trace will be readable at the passive site?
Do you recommend any site for anything on C2?
Thank you,
ktf
"Mike Epprecht (SQL MVP)" wrote:
C2 auditing path?
How can I specify the c2 auditing files path?
Hi
The files will be in the default data directory, which you can change.
John
"Artificer" wrote:
|||How to specify that?
"John Bell" wrote:
|||Hi
The easiest way is to change it on the database tab of the server properties
dialog in Enterprise Manager and restart SQL Server.
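If you prefer not to use the EM dialog, a sketch of the same change through the undocumented registry procedure (the path value here is only an example; back up the registry first):
EXEC master..xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer',
    N'DefaultData',
    REG_SZ,
    N'S:\SQLData'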
John
"Joseph" wrote:
|||If the database tab of the server properties is changed, what will the new default database destination be? The one described in the database tab, or the one of the model db?
"John Bell" wrote:
|||Hi
New databases' data files will be created in the directory that you have changed on the database tab of the server properties dialog. This directory does not have to be the same location as the system databases.
If you don't want to create the data and log files in the default directories, you can either override them in the new database dialog's data file and log file tabs, or use Query Analyzer to specify them explicitly in the CREATE DATABASE statement.
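For example, a minimal sketch of such a CREATE DATABASE statement (names, paths and sizes are placeholders only):
CREATE DATABASE MyNewDb
ON PRIMARY
    (NAME = MyNewDb_Data, FILENAME = 'S:\SQLData\MyNewDb_Data.mdf', SIZE = 100MB)
LOG ON
    (NAME = MyNewDb_Log, FILENAME = 'S:\SQLLogs\MyNewDb_Log.ldf', SIZE = 25MB)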
John
"blackpit" wrote:
|||It is clear.
Thank you
"John Bell" wrote:
|||Thanks!
C2 auditing ?
I just inherited a data farm and I noticed that the person before me had C2 auditing turned on.
I noticed that the .trc files only date back to 04/05 and that the last time a file was modified was yesterday.
At the rate they grow to the 200 MB rollover size, these files should be dated once or even twice a day. Does anybody know how I can check if C2 is still turned on? Or any idea why that is? I suppose I can wait another couple of days and see, but...
thanks
>> does anybody know how I can check if c2 is still turned on ?
You can use sp_configure and look for the running value of 'c2 audit mode'.
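A minimal sketch of that check ('c2 audit mode' is an advanced option, so show advanced options first):
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'c2 audit mode'
-- run_value = 1 means C2 auditing is still turned on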
Anith
C:\Program Files\Microsoft SQL Server\90\DTS\ folder missing
I found it. Our packager must have customized the SQL package so that this directory resides at C:\MSSQL9Data\90\DTS\Binn.
Sunday, February 19, 2012
C# application - method to import delimited CSV to SQL Server table
Hi,
I'm writing a telecom billing system where it is necessary to import CSV-delimited files into a SQL Server table.
1) The CSV files to import differ in column arrangement, i.e. SQL Server table arrangement = A, B, C, D whilst CDR = -, A, -, C, D
2) The CSV files are delimited with different characters, i.e. some are delimited by commas, some by semicolons.
I can import a CSV file from my application using a BULK INSERT query, but only when the CSV resembles the structure of the table exactly.
How can I pass a CSV to the table and state where I want each CSV column to fill a column in the SQL Server table?
Below is how I am managing to pass a CDR to the SQL table, using a CSV that I have pre-manufactured to resemble the structure of the SQL table. Is there a way I can state which values from the CSV I want to pass to corresponding columns in the SQL table?
// requires: using System.Data.OleDb;
string conString = @"Provider=SQLOLEDB;Server=(local);Database=Billing;Uid=sa;Pwd=sa;";

for (int i = listBox1.Items.Count - 1; i >= 0; i--)
{
    // each list box item holds the path of one CDR file to import
    string cdr = listBox1.GetItemText(listBox1.Items[i]);
    string strSQL = "BULK INSERT Campaign FROM '" + cdr +
                    "' WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')";

    OleDbConnection objConnection = new OleDbConnection(conString);
    try
    {
        objConnection.Open();
        using (OleDbCommand objCmd = new OleDbCommand(strSQL, objConnection))
        {
            objCmd.ExecuteNonQuery();
        }
        // remove the file from the list once it has been imported
        listBox1.Items.Remove(listBox1.Items[i]);
    }
    catch (Exception ex)
    {
        // log or display ex.Message as appropriate
    }
    finally
    {
        objConnection.Close();
    }
}
Regards
Simon
It is not practical to use ExecuteNonQuery to import a CSV file because the data needs to be cleaned before insert, so try the link below for a step-by-step guide to importing the data into SQL Server. The reason it is failing is that you are asking ADO.NET, a data access component, to perform an ETL (extraction, transformation and loading) task. Hope this helps.
http://www.databasejournal.com/features/mssql/article.php/3580216
|||The code works, but only for a CSV that mirrors the table in structure. I want to create a query that picks certain columns from the CSV, parses them and pastes them into a table.
regards
Simon
|||If the columns are the same, that is a standard insert; picking columns is again ETL. Try the CSV parser from the link below; ADO.NET is not used. Hope this helps.
http://www.heikniemi.net/hc/archives/000152.html
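If you want to stay on the T-SQL side instead, one possible sketch (table and column names below are made up) is to BULK INSERT into a staging table that mirrors the CSV layout and then pick only the columns you need; a different delimiter is just a different FIELDTERMINATOR:
CREATE TABLE CampaignStaging
    (Filler1 VARCHAR(100), ColA VARCHAR(100), Filler2 VARCHAR(100), ColC VARCHAR(100), ColD VARCHAR(100))
BULK INSERT CampaignStaging
FROM 'C:\cdr\file1.csv'
WITH (FIELDTERMINATOR = ';', ROWTERMINATOR = '\n')
INSERT INTO Campaign (A, C, D)
SELECT ColA, ColC, ColD
FROM CampaignStaging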
|||I'll give it a go and get back to you; it seems a good way around it. Can I use it to iterate over each separate line and load the values I need into a temporary dataset, i.e. containing a pseudo table mirroring the one in SQL Server?
Simon
C drive not big enough
I am installing the x86 executable version of SQL Server. I copied it to my desktop and clicked Run. It extracts the files and then the Install Wizard tells me there is not enough space on the C drive to load the program. I have 23 gigabytes of free space and the program only takes about 900 megabytes.
Is there a solution to this problem?
Thanks,
Modez
Hi Modez.
Is this SQL 2000 or SQL 2005?
Try creating a 'dummy' text file that is a few hundred mb in size or more (just create some garbage in the file and copy/paste it until the file becomes large enough), then try running the setup. If that works, you can delete the garbage file on completion.
Repost if you continue to have troubles,
HTH
|||Chad:
It is 2005. I tried making a .txt file and got the same message. Then I tried downloading it again and running it from the web site instead of saving it to my desktop, but it still says my C: drive is not big enough.
Do you have any other suggestions?
Thanks,
Modez
Tuesday, February 14, 2012
buy a clue as to how to "iterate sideways"
hi
I am having a hard time with two kinds of text files that have a kind of 'repeating groups' structure in them... I want to loop over them, but don't know how.
One is a text file with a record length of 1200 bytes, but all 95,601 records are on one row with no LF, CR or anything else between them, so I cannot figure out how to get the ForEach container to chop off a 1200-byte chunk at a time and then go get the next 1200 bytes, because the items aren't stacked, they are adjacent to each other, if you see what I mean.
The other text file has a record length of 52 bytes with 28 bytes of filler, but this file also goes 'down and across', meaning that here there are fourteen 'rows' in the file, and they have thousands of lines too, so this one also has to consume all the columns on a row before it moves to the next row.
Am I making this harder than it needs to be?
Thanks for any light.
This file is a "fixed-width" file. Set up a flat file connection manager, set it to fixed width, and then define your columns appropriately. Or define one column, 1200 bytes long, to read in each record as one field. Up to you. From there, you have several options.|||Thanks very much for your reply.
Yes, it is a fixed-length file in the sense that the columns within each 1200 bytes occur in the same place, but this also means that the entire 358 MB of the file is on one line. So defining my columns appropriately means creating one column for each of the 96,000+ records contained on that line, so the structure of the data is not exactly what the fixed-length file format envisions, because that implicitly assumes an end of line, end of record, or some physical new line in the file, and I don't have that.
So what I was imagining was using a ForEach loop to consume the line 1200 bytes at a time (how, I don't know), push it into a variable, and pipe that variable into a derived column (does it demand a table or can it work with a variable?) that would 'stack up' the 1200-byte chunks into the fixed-length format described above, and then finally parse the file into columns using the fixed-length layout as described. But my problem is how to recognize 1200 bytes as a record in the first place. If I define a 1200-byte fixed-length file, I get exactly one record; the rest of the line is thrown out because it doesn't fit into 'Column 0'.
thanks again
drew
|||A fixed-width file does not have to use a row delimiter. So if I have this data:
1 2 3
1 2 3
and I defined it as a fixed width file with three columns of 1 character each, the format SSIS expects is:
123123
Fixed-width files with row delimiters are actually considered ragged right by SSIS.
You shouldn't have to use a ForEach to read this - the flat file connection manager set to fixed-width should work fine.
|||It worked great... please forgive me for being dense.
Friday, February 10, 2012
BURST Support
Does anyone know if Reporting Services supports BURST execution of a report? That is, one report execution generating several output files (Excel, for example) based on user data perimeter or rights.
Many thanks.
No it does not.
--
Bruce Loehle-Conger
MVP SQL Server Reporting Services
"Jean-Marc PUGIN" <jmpugin@.mobileworkers.com> wrote in message
news:Olw7mKNyEHA.4028@.TK2MSFTNGP15.phx.gbl...
|||Writing a custom delivery extension is always an option :-)
--
Hope this helps.
---
Teo Lachev, MVP [SQL Server], MCSD, MCT
Author: "Microsoft Reporting Services in Action"
Publisher website: http://www.manning.com/lachev
Buy it from Amazon.com: http://shrinkster.com/eq
Home page and blog: http://www.prologika.com/
---
"Bruce L-C [MVP]" <bruce_lcNOSPAM@.hotmail.com> wrote in message
news:uQQPjqNyEHA.3908@.TK2MSFTNGP12.phx.gbl...
|||Yes, it can, but you are going to have to do a little work.
Firstly, when a report is executed, all the data is collected from the data source(s) along with the definition of the report and combined to produce an executed representation of the report, which is stored in "session" as an intermediate format until you manually flush the session or until it times out.
You can include the SessionID, as well as the format, in the URL request for a report from the Report Server, or if you are using the SOAP interface you can also specify the SessionID.
What this means is that you can get several different renderings from exactly one execution of the report against its data sources - which means that your Excel, PDF and HTML versions of a report can all be consistent with each other - something which is essential in systems where the data is pretty volatile.
We discuss SessionID in our book "The Hitchhiker's Guide to SQL Server 2000 Reporting Services".
BTW: If you were meaning to ask whether, by clicking on an object in the Report Manager, you can also do a BURST - then the answer to that is also yes - you just need to customise the Report Manager, and we also discuss that in the book.
Peter Blackburn
Windows Server Systems - SQL Server MVP
Hitchhiker's Guide to SQL Server Reporting Services
http://www.sqlreportingservices.net
"Jean-Marc PUGIN" <jmpugin@.mobileworkers.com> wrote in message
news:Olw7mKNyEHA.4028@.TK2MSFTNGP15.phx.gbl...
Bulkload XML to SQL2005
I have hundreds of large (500 MB) XML files I need to upload into SQL 2005. I followed the example from http://support.microsoft.com/default.aspx/kb/316005/en-us with no major problems. Unfortunately, my data can be several fields deep within the XML file (sample below).
I created a VBS file to process the bulk load.
I'm not able to figure out how to create the mapping (schema) for this file structure. I'm trying to use a mapping schema similar to what is used in the example.
How do I modify the mapping schema for my format?
Please help educate me. I am a noob, so please keep it simple.
Thanks
Charles W
XML File
<ROOT>
<Customers>
<CustomerId><IDno><pdat>2111</pdat></IDno></CustomerId>
<CompanyName><pdat>2Sean Chai</pdat></CompanyName>
<City><pdat>NY</pdat></City>
</Customers>
<Customers>
<CustomerId><IDno><pdat>2112</pdat></IDno></CustomerId>
<CompanyName><pdat>2Tom Johnston</pdat></CompanyName>
<City><pdat>LA</pdat></City>
</Customers>
<Customers>
<CustomerId><IDno><pdat>2113</pdat></IDno></CustomerId>
<CompanyName><pdat>2Institute of Art 3</pdat></CompanyName>
</Customers>
</ROOT>
XSD File
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"
xmlns:sql="urn:schemas-microsoft-com:xml-sql" >
<xs:element name="ROOT" sql:is-constant="1">
<xs:complexType>
<xs:sequence>
<xs:element maxOccurs="unbounded" ref="Customers"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="Customers" sql:relation="Customer">
<xs:complexType>
<xs:sequence>
<xs:element ref="CustomerId" sql:field="CustomerId" />
<xs:element ref="CompanyName" sql:field="CompanyName" />
<xs:element minOccurs="0" ref="City" sql:field="City"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="CustomerId" type="xs:integer"/>
<xs:element name="CompanyName" type="xs:string"/>
<xs:element name="City" type="xs:NCName"/>
</xs:schema>
VBS
Set objBL = CreateObject("SQLXMLBulkLoad.SQLXMLBulkLoad")
objBL.ConnectionString = "provider=SQLOLEDB.1;data source=12T4581-CHUCK;database=MyDatabase;uid=XMLtest;pwd=XMLtest"
objBL.ErrorLogFile = "c:\error.log"
objBL.Execute "c:\customer3.xsd", "c:\customers3.xml"
Set objBL = Nothing
FYI:
To solve my problem, I created a .net application to modify my xml file into a format that I can work with. It really turned out to be the best solution for me because I have to make some other modifications.
I can take a 600K XML file and strip out the data I need in under 1 minute. At this rate I can prep an entire year's worth of data in under 15 minutes.
Now I need to master XSD files.....
Charles W
|||This is the recommended way to achieve your mapping. Bulkload would not support your shape of having multiple layers of wrappers around your scalar type. So Bulkload would support:
<City>NY</City>
But not:
<City><pdat>NY</pdat></City>
bulkload xml file with declared external dtd file
The XML files declare an external DTD.
I must use CONVERT with style option 2 to avoid an internal DTD error.
When I try to read the column, SQL Server throws a "Lists of BinaryXml value tokens not supported" error, because the XML uses special entity characters that were declared in the DTD.
Is there a solution to resolve this, using the DTD like a schema?
Paolo
Paolo,
Regarding DTD processing for imposing constraints as XSD does, no, there is no option in SQL Server 2005 to cause this validation. The extent of our DTD processing is to verify that internal subsets are syntactically correct, perform entity expansion, and apply attribute default values. We do not perform validation of the xml document per the DTD constraints.
For the error you are receiving, I am unable to reproduce this with a simple test. Please let me know if the following example works for you. If so, perhaps you can elaborate on what is different with what you are attempting.
(1)Create file on disk (c:\temp\dtd1.xml) with contents like:
<!DOCTYPE DOC SYSTEM "C:\MyExternalDTD.xml" [<!ATTLIST elem1 attr1 CDATA "defVal1">]><elem1><childelem1/></elem1>
(2)Execute following tsql to load and query out the document:
CREATE TABLE t1 (xmlCol XML)
go
INSERT t1
SELECT
CONVERT(XML, BulkColumn, 2)
FROM
OPENROWSET (BULK 'c:\temp\dtd1.xml', SINGLE_BLOB) orset(BulkColumn)
go
SELECT * FROM t1
go
Regards,
Adrian Hains