Friday, March 30, 2012
Reading a log file
Is it possible to read a log file to see what changes have been made to a
particular table at a particular time? And if yes, how?
Thanks,
Ivan

> Is it possible to read a log file to see what changes have been made to a
> particular table at a particular time? And if yes, how?
http://www.aspfaq.com/2449

Keep in mind that the transaction log is truncated when log backups are taken, so earlier entries may no longer be available.
"Ivan Debono" <ivanmdeb@hotmail.com> wrote in message
news:OsmBwZfjFHA.1968@TK2MSFTNGP14.phx.gbl...
> Hi all,
> Is it possible to read a log file to see what changes have been made to a
> particular table at a particular time? And if yes, how?
> Thanks,
> Ivan
>
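For completeness: on SQL Server 2005 and later, the undocumented and unsupported fn_dblog function can dump the active portion of the transaction log. A rough sketch only; the column names come from the undocumented output and may change between builds, and the table name is illustrative:

```sql
-- Undocumented and unsupported; do not rely on this in production.
-- fn_dblog(NULL, NULL) returns the entire active portion of the log.
SELECT [Current LSN], [Operation], [Transaction ID], [Begin Time]
FROM fn_dblog(NULL, NULL)
WHERE AllocUnitName LIKE 'dbo.MyTable%'   -- hypothetical table name
ORDER BY [Current LSN];
```

Third-party log readers (such as those the ASPFAQ link above discusses) are usually a safer route.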
Wednesday, March 28, 2012
Read/Write statistics, per table?
Is there any way to get the number of reads/writes, on each table, for a
particular time period, without profiling the db?
Thanks again!

In SQL Server 2005, you can pull at least some form of this information from
sys.dm_db_index_usage_stats. In SQL Server 2000, I don't think you can get much of
anything.
Adam Machanic
SQL Server MVP
Author, "Expert SQL Server 2005 Development"
http://www.apress.com/book/bookDisplay.html?bID=10220
"Derrick" <derrick1298@excite.com> wrote in message
news:u1XMdROuHHA.3544@TK2MSFTNGP03.phx.gbl...
> And one other Q: is there any way to get the number of reads/writes, on
> each table, for a particular time period, without profiling the db?
> Thanks again!
>
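As a sketch of the 2005 approach mentioned above: the counters in sys.dm_db_index_usage_stats are cumulative since the last service restart, so to cover a specific time period you would snapshot the result at the start and end of the window and subtract. Assuming the current database:

```sql
-- Rough per-table read/write totals from the index usage DMV.
-- Counters reset when the SQL Server service restarts.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       SUM(s.user_seeks + s.user_scans + s.user_lookups) AS reads,
       SUM(s.user_updates) AS writes
FROM sys.dm_db_index_usage_stats AS s
WHERE s.database_id = DB_ID()
GROUP BY s.object_id
ORDER BY reads DESC;
```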
Read/edit history
I have inherited several SQL 7 databases and would like to know: is there a
way for me to determine the last time they were accessed or written to by a
user or application?
--
Any and all contributions are greatly appreciated ...
Regards TJ

Hi,
You have to either:
1. Write triggers to audit the activity, or
2. Enable SQL Profiler.
FYI, you can make use of the sysprocesses table for online users, but once a
user logs off the entry is removed.
Thanks
Hari
MCDBA
"TJ" <nospam@nowhere.com> wrote in message
news:u3Hy83i2DHA.4060@TK2MSFTNGP11.phx.gbl...
> I have inherited several SQL 7 databases and would like to know: is there a
> way for me to determine the last time they were accessed or written to by a
> user or application?
> --
> Any and all contributions are greatly appreciated ...
> Regards TJ

Thx so much!
"Hari" <hari_prasad_k@hotmail.com> wrote in message
news:#8l9Fhl2DHA.556@TK2MSFTNGP11.phx.gbl...
> Hi,
> You have to either:
> 1. Write triggers to audit the activity, or
> 2. Enable SQL Profiler.
> FYI, you can make use of the sysprocesses table for online users, but once a
> user logs off the entry is removed.
> Thanks
> Hari
> MCDBA
>
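A minimal sketch of option 1 above (an audit trigger). The audit table, trigger, and audited table names are all illustrative, and a real solution would need one trigger per audited table:

```sql
-- Hypothetical audit table; one row per data modification.
CREATE TABLE dbo.AuditLog (
    TableName sysname  NOT NULL,
    Action    char(1)  NOT NULL,               -- I, U or D
    ChangedAt datetime NOT NULL DEFAULT GETDATE(),
    ChangedBy sysname  NOT NULL DEFAULT SUSER_SNAME()
)
GO
CREATE TRIGGER trg_MyTable_Audit ON dbo.MyTable  -- hypothetical table
FOR INSERT, UPDATE, DELETE
AS
BEGIN
    -- Classify the action from the inserted/deleted pseudo-tables
    DECLARE @action char(1)
    IF EXISTS (SELECT * FROM inserted)
        IF EXISTS (SELECT * FROM deleted) SET @action = 'U'
        ELSE SET @action = 'I'
    ELSE SET @action = 'D'
    INSERT INTO dbo.AuditLog (TableName, Action) VALUES ('MyTable', @action)
END
```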
Monday, March 26, 2012
READ Only Cursor
I have the following cursor declaration that was working before. Each time I
run it now I get the following message:
FOR UPDATE cannot be specified on a READ ONLY cursor.
How do I resolve this?
declare export_cursor cursor for
select [RecordKey]
from [ExportData]
for update

Try,
declare export_cursor cursor
SCROLL_LOCKS
for
select [RecordKey]
from [ExportData]
for update
...
AMB
"Emma" wrote:
> I have the following cursor declaration that was working before. Each time I
> run it now I get the following message:
> FOR UPDATE cannot be specified on a READ ONLY cursor.
> How do I resolve this?
> declare export_cursor cursor for
> select [RecordKey]
> from [ExportData]
> for update

I tried the SCROLL_LOCKS option and it did not work. The way I had it was working
before. Will recreating the database have anything to do with the cursor not
working?
"Alejandro Mesa" wrote:
> Try,
> declare export_cursor cursor
> SCROLL_LOCKS
> for
> select [RecordKey]
> from [ExportData]
> for update
> ...
>
> AMB
> "Emma" wrote:

Have you considered replacing the cursor with set-based code? Cursors
in general are a bad idea. Update cursors are worse.
--
David Portas
SQL Server MVP

Try,
declare export_cursor cursor
KEYSET
for
select [RecordKey]
from [ExportData]
for update
...
AMB
"Emma" wrote:
> I tried the SCROLL_LOCKS option and it did not work. The way I had it was working
> before. Will recreating the database have anything to do with the cursor not
> working?
> "Alejandro Mesa" wrote:

I figured it out. The table has to have a unique index in order for the
update cursor to work. The database was being replicated before, and I took
replication off and deleted all the rowids added by replication. The rowid
was being used as the unique index before.
What is set-based code?
"David Portas" wrote:
> Have you considered replacing the cursor with set-based code? Cursors
> in general are a bad idea. Update cursors are worse.
> --
> David Portas
> SQL Server MVP
> --
>

Set-based code basically means the standard SELECT, UPDATE, DELETE and
INSERT statements. These operate on sets of rows at a time rather than
individual row-by-row processing.
Set-based SQL is generally much more efficient, more concise, and easier to
develop and maintain than cursors. Most of the time cursors are unnecessary,
and set-based SQL should be your first choice for performing any data
manipulation.
David Portas
SQL Server MVP
--
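To make the advice concrete, the update cursor from this thread could usually be collapsed into a single statement. What the cursor loop actually assigned isn't shown in the thread, so the SET clause and the Exported column below are hypothetical stand-ins:

```sql
-- One set-based statement instead of a row-by-row cursor loop.
-- Exported is a hypothetical column standing in for whatever the
-- cursor's UPDATE ... WHERE CURRENT OF was assigning.
UPDATE [ExportData]
SET Exported = 1
WHERE [RecordKey] IS NOT NULL   -- whatever filter the loop applied
```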
Friday, March 23, 2012
Read file info via Transact SQL....
Any ideas how I could go about getting a file's "Created" date/time into a
datetime variable using T-SQL? I'm thinking along the lines of using the
results of a call to xp_cmdshell but as to what command I should call...
well...
Any help would be appreciated!

Sure, you can use any DOS command via xp_cmdshell. Here is a rough example
of reading datetimes.
CREATE TABLE #dirlist (FName VARCHAR(1000))
-- Insert the results of the dir command into a table so we can scan it
INSERT INTO #dirlist (FName)
EXEC master..xp_cmdshell 'dir /OD C:\Backups\*.trn'
-- Remove the garbage (lines that do not begin with a date)
DELETE #dirlist WHERE
SUBSTRING(FName,1,2) < '00' OR
SUBSTRING(FName,1,2) > '99' OR
FName IS NULL
-- @DelDate (the cutoff date) must be declared and set first, e.g.:
DECLARE @DelDate DATETIME
SET @DelDate = GETDATE()
SELECT SUBSTRING(FName,40,40) AS FName
FROM #dirlist
WHERE CAST(SUBSTRING(FName,1,20) AS DATETIME) < @DelDate
AND SUBSTRING(FName,40,40) LIKE '%.TRN'
Andrew J. Kelly SQL MVP
"len" <len@.discussions.microsoft.com> wrote in message
news:74125E9D-6E97-46FE-855C-CA2913135FE3@.microsoft.com...
> Hi there.
> Any ideas how I could go about getting a file's "Created" date/time into a
> datetime variable using T-SQL? I'm thinking along the lines of using the
> results of a call to xp_cmdshell but as to what command I should call...
> well...
> Any help would be appreciated!

Rather than placing this functionality in a stored procedure, consider a DTS
package and VBScript task.
http://msdn.microsoft.com/library/d...
flow_0793.asp
http://www.sqldts.com/?246
http://msdn.microsoft.com/library/d...efc4b9f49c7.asp
"len" <len@.discussions.microsoft.com> wrote in message
news:74125E9D-6E97-46FE-855C-CA2913135FE3@.microsoft.com...
> Hi there.
> Any ideas how I could go about getting a file's "Created" date/time into a
> datetime variable using T-SQL? I'm thinking along the lines of using the
> results of a call to xp_cmdshell but as to what command I should call...
> well...
> Any help would be appreciated!
Wednesday, March 21, 2012
Read Consistency
What is the mechanism used by select statements to return point in time data?
I have a test setup: table t1 has 1,000,000 rows. A query (call it q1) that selects all the rows (with NOLOCK) and processes them takes 10 minutes. At the same time, another process inserts another 1,000,000 rows into the same table t1. The client that issued query q1 sees just the original 1,000,000 rows.
My understanding is that NOLOCK does not hold any locks. So, how did SQL Server know that it should not return the rows that are inserted after I issued the query q1? Some explanation or link to some whitepapers would be helpful.
Thanks
Unless you use the row-level versioning introduced in SQL Server 2005, a locking-based database engine has no notion of 'point-in-time' data as of some fixed time in the past. Consistency guarantees hold only until the transaction commits, and only for the precise subset of the data read or written by the transaction (including data the transaction intended to read but that did not yet exist when it tried to access it).
Instead, the engine provides the illusion of a serialized execution, as if the view of the data accessed and touched by the transaction were 'frozen' while the transaction was processed: the things that other transactions did either occurred in the past, will occur in the future, or don't matter because those transactions never read or wrote the data touched by our transaction. As a result, the data seen by a single transaction can ultimately be viewed as a consistent slice of some relevant subset of the entire database: as of 'now' while the transaction is active, and as of commit time after it commits.
Having said that I realize that it might sound really cryptic and confusing - but this is how the things are done in the locking-based transaction processing systems. If you come from the Oracle world it will take some time to adjust to a different paradigm.
Thanks, Tengiz, but ...
In my scenario, let us say that query q1 started at 10 AM and finished at 10:10 AM. At 10 AM the table had 1,000,000 rows, and by 10:05 AM other transactions had inserted 1,000,000 more, making a total of 2,000,000 rows. Why did SQL Server not return 2,000,000 rows to the q1 client? Somehow SQL Server knew to provide the rows that existed at 10 AM (which I call point-in-time data; maybe the wrong terminology?) and to ignore the rows added after that point. What is the internal mechanism the engine uses to give that illusion? I am inclined to think that even though it does not create locks, it might create some sort of semaphores, latches or in-memory tables to keep track of the rows that need to be returned to the q1 client. As you guessed, I am from an Oracle background; maybe you have answered my question and it will just take a little longer to sink in.

Could you please be more specific in describing your scenario? The fact that the query only returned the initial set of rows and didn't see the rows inserted after the query started does not really mean that the server somehow knows, or takes into account, the time when the rows were inserted.
Again, things are different if you use row-level versioning: you do that either by switching to snapshot isolation (after enabling it for the database) or by allowing versioning-based read-committed isolation.
But assuming that you don't use row-level versioning, then depending on the existing indexes, the actual query plans, and the key values that existed in the table before the insert versus the key values inserted, a select query with the right timing could easily skip the newly inserted records; but that would have nothing to do with the read consistency provided by row-level versioning.

Thanks to Tengiz for your interest and perseverance in helping me out.
The database is not set up to use row versioning. Also, this is a data warehouse system, so DML statements can come only from ETL. No other concurrent user is touching this table while this program is running. However, my SSIS program that maintains this table opens two sessions (a reader and a writer), as I explain in step 4. My concern is that these two sessions are stepping on each other.
-
Step 0:
-- display isolation level
dbcc useroptions
isolation level = read committed
-
Step 1:
CREATE TABLE Table1 (
Column1 [int] NOT NULL,
Column2 [int] NOT NULL,
Column3 [bit] NOT NULL,
Column4 [datetime] NOT NULL,
Column5 [int] NOT NULL,
Column6 [int] NOT NULL,
Column7 [varchar](255) NULL,
Column8 [int] NULL,
Column9 [int] NULL,
Column10 [int] NULL,
CONSTRAINT [PKC_REALDB_StatusHistory] PRIMARY KEY CLUSTERED (
[Column1] ASC,
[Column2] ASC,
[Column4] ASC,
[Column5] ASC,
[Column6] ASC )
)
-
Step 2:
INSERT INTO Table1
SELECT *
FROM Source_Table1
10,556,214 rows inserted.
-
Step 3:
-- delete the rows to simulate unexpected results due to dirty reads
delete
from SourceStage.RealDB_StatusHistory
where Column1 % 2 = 0
5,288,119 rows deleted
-
Step 4:
Now, I have an SSIS package that does an update-else-insert operation. It does an SSIS left outer merge join to decide update versus insert. Since I use a table lock in the destination component, the source component (the one that feeds the data from the target table to do the merge join) uses the NOLOCK hint. The destination component uses fast load with a batch size of 1000 rows. The rows to be updated are saved to an empty intermediate table. A Transact-SQL UPDATE statement is used after the data flow is done to apply those into Table1.
5,288,119 rows inserted
5,268,095 rows updated
-
I repeated the above steps without the clustered index and I get the same row counts. The row counts show no surprises, which leads me to believe that there is some kind of lock. I have to make sure this works 100% before I put this code into production. The documentation leads me to believe that it does not work 100% of the time. If that is the case, I should be able to simulate a scenario where I get wacky row counts.
Here is what I think is the reason for not getting wacky row counts in my setup: (a) when the clustered index is in place, it rebalances the tree during the delete operation, so the inserted rows go into new pages and SQL Server somehow knows to ignore them; (b) when there is no clustered index, it sorts the data in tempdb before it is fed to SSIS, so the data is essentially sourced from tempdb during the pipeline operation of SSIS. Am I thinking in the right direction?
So the big question is, can I use NOLOCK without any bad effects in this scenario? If you believe this will lead to some dirty read scenario, how can I simulate it?
|||
A quick answer to your question "can I use NOLOCK without any bad effects in this scenario?" is NO. The NOLOCK hint relaxes certain concurrency-related guarantees in the engine and essentially nullifies the notion of transactional consistency. It doesn't mean that you will never get any consistency if you use NOLOCK, but you will not in general have predictable results.
I'm not an SSIS expert, so I'm not sure how SSIS really performs the 'insert else update' operation, and I still don't quite get what you actually do in this scenario. But from the description it looks like the plan does include spooling in tempdb for a sort: the fast load option normally feeds data in through the BCP API, which, if the destination table has a clustered index, assumes that the data needs to be sorted before it gets delivered to the destination. Hence, if the input provided by SSIS is not sorted (there is a special hint that SSIS can specify in order to avoid the extra sort), the query optimizer adds the sort operator.
Spooling can certainly make it look like there indeed was some kind of 'read consistency' provided, but, again, depending on the actual query and specific conditions the optimizer is free to choose other options as well.
Thanks for the reply. Thinking about it further, when I don't use the NOLOCK hint, my SSIS package just waits forever. When I use the NOLOCK hint, it seems to work fine. However, what if SQL Server does not honour the NOLOCK hint? My package might wait forever. So, I decided to use the Lookup Transformation of SSIS rather than the Merge Join Transformation. The Lookup Transformation can cache the data upfront before the Data Flow Task starts processing source rows. This way there is no contention.
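As a sketch of the versioning alternative discussed in this thread: on SQL Server 2005, snapshot isolation gives a reader a consistent point-in-time view without dirty reads and without blocking, which is exactly what NOLOCK cannot guarantee. The database name below is illustrative:

```sql
-- Enable row versioning for the database (one-time setup).
ALTER DATABASE MyWarehouse SET ALLOW_SNAPSHOT_ISOLATION ON;

-- In the reader session: no shared locks taken, no dirty reads.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT COUNT(*) FROM Table1;   -- sees the rows as of the snapshot
COMMIT;
```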
Tuesday, March 20, 2012
Re: help please
I have these columns: date, time, stock name, price.
I have 2 questions:
1. What is the command (or query language) to get the
first and/or last observations for any given day (I know it can be done in an
aggregate query in MS Access, but can it be done in a SQL Server query as well)?
E.g. I want to get the first and last price of the day for any particular stock.
2. How do I calculate the return with the following formula:
return = log P(t) - log P(t-1), where P(t) is the price at
time t, say 10 AM, and P(t-1) is the price one period
before t, say 9 AM?
Regards
Charly

The MIN and MAX functions will tell you the price ranges,
e.g. select max(pricecolumn) from tablename where date='20040517'
select min(pricecolumn) from tablename where date='20040517'
If you want to be more selective, look at the date/time format you are using in your data and tailor the WHERE clause to select at that particular time,
e.g. where date='2003-02-28 10:00:00.000'
Look at Books Online for the LOG function, and use selective WHERE clauses for the times, i.e. where date='2004-05-17 10:00:00.000'

select max(pricecolumn) from tablename where date='20040517'
group by [stocks name]
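Putting the two answers together: MIN/MAX alone give the price extremes, not the first and last observations of the day. Assuming a table prices(tradedate, tradetime, stockname, price) (all names hypothetical), a SQL Server 2000-compatible sketch for both questions:

```sql
-- Q1: first and last price of each day, per stock.
SELECT p.stockname, p.tradedate,
       MIN(CASE WHEN p.tradetime = x.firsttime THEN p.price END) AS firstprice,
       MIN(CASE WHEN p.tradetime = x.lasttime  THEN p.price END) AS lastprice
FROM prices AS p
JOIN (SELECT stockname, tradedate,
             MIN(tradetime) AS firsttime, MAX(tradetime) AS lasttime
      FROM prices
      GROUP BY stockname, tradedate) AS x
  ON x.stockname = p.stockname AND x.tradedate = p.tradedate
GROUP BY p.stockname, p.tradedate

-- Q2: log return between two observation times on the same day.
SELECT p1.stockname, p1.tradedate,
       LOG(p1.price) - LOG(p0.price) AS logreturn
FROM prices AS p1
JOIN prices AS p0
  ON p0.stockname = p1.stockname AND p0.tradedate = p1.tradedate
WHERE p1.tradetime = '10:00' AND p0.tradetime = '09:00'
```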
Re: Any ideas on this one?
Still have some problems with the last thing on my first report! Not able
to get the timedate field to show without the time.
If your RS parameter is set to a string, the original
"convert(datetime,period,105)" should give you the listing you desire. When
this parameter is then passed to SQL it "should" automatically be recognized
as a date, but if not, you could pass the parameter as a string, and then
declare and set a new SQL parameter to the cast(@param as datetime).
This doesn't seem to work; if I set it to string and do the convert it still
shows the time.
I'm trying to do it the other way around but get a syntax error. What could be
wrong here:
CREATE FUNCTION udf_MyDate (@indate datetime, @separator char(1))
RETURNS Nchar(20)
AS
BEGIN
RETURN
CONVERT(Nvarchar(20), datepart(mm, @indate))
+ @separator
+ CONVERT(Nvarchar(20), datepart(dd, @indate))
+ @separator
+ CONVERT(Nvarchar(20), datepart(yy, @indate))
END
GO
SELECT DISTINCT PERIODESTART, DAY(PERIODESTART) AS Expr1,
[dbo].[udf_MyDate]
(periodestart,'/') AS pstart
FROM DEBSTAT
WHERE (DAY(PERIODESTART) <> '31')
ORDER BY PERIODESTART
DROP FUNCTION [dbo].[udf_MyDate]
Jack
--
I am protected by the free SPAMfighter for private users.
So far it has saved me from 44893 spam mails.
Paying users do not get this message in their e-mails.
Get the free SPAMfighter here: www.spamfighter.dk

Right mouse click on the field, properties. Select the date format you want
(assuming I correctly understand what you are looking for; it sounds like
you are concerned about displaying it versus sending a parameter to a query
without a time).
Bruce Loehle-Conger
MVP SQL Server Reporting Services
"Jack Nielsen" <no_spam jack.nielsen@.get2net.dk> wrote in message
news:e41LODJjFHA.3472@.TK2MSFTNGP10.phx.gbl...
> Still have some problems with the last thing on my first report ! Not able
> to get the timedate field to show without the time.
> If your RS parameter is set to a string, the original
>> "convert(datetime,period,105)" should give you the listing you desire. When
>> this parameter is then passed to SQL it "should" automatically be recognized
>> as a date, but if not, you could pass the parameter as a string, and then
>> declare and set a new SQL parameter to the cast(@param as datetime).
> This doesn't seem to work, if I set it to string and do the convert it
> still
> shows the time.
> I'm trying to do it the other way around but get syntax error, what could
> be
> wrong here:
> CREATE FUNCTION udf_MyDate (@.indate datetime, @.separator char(1))
> RETURNS Nchar(20)
> AS
> BEGIN
> RETURN
> CONVERT(Nvarchar(20), datepart(mm,@.indate))
> + @.separator
> + CONVERT(Nvarchar(20), datepart(dd, @.indate))
> + @.separator
> + CONVERT(Nvarchar(20), datepart(yy, @.indate))
> END
> GO
> SELECT DISTINCT PERIODESTART, DAY(PERIODESTART) AS Expr1,
> [dbo].[udf_MyDate]
> (periodestart,'/') AS pstart
> FROM DEBSTAT
> WHERE (DAY(PERIODESTART) <> '31')
> ORDER BY PERIODESTART
> DROP FUNCTION [dbo].[udf_MyDate]
> Jack
>
>
> --

It's a dataset containing timedate fields, used as a parameter list, in
the
list I can only choose timedate not how to show it, tried a couple of
things
but it doesn't seem to work out as planned.
It shows up like this 05/08/05 00:00:00 and I don't need the time only the
date, Chris has tried to help me out with a userdef. function but I just
can't get it right, see the statement below.
Jack
"Bruce L-C [MVP]" <bruce_lcNOSPAM@hotmail.com> wrote in message
news:eMNLe1JjFHA.2472@.TK2MSFTNGP15.phx.gbl...
> Right mouse click on the field, properties. Select the date format you
> want (assuming I correctly understand what you are looking for. It sounds
> like you are concerned about displaying it versus sending a parameter to a
> query without a time.
>
> --
> Bruce Loehle-Conger
> MVP SQL Server Reporting Services
>
> "Jack Nielsen" <no_spam jack.nielsen@.get2net.dk> wrote in message
> news:e41LODJjFHA.3472@.TK2MSFTNGP10.phx.gbl...
>> Still have some problems with the last thing on my first report! Not able
>> to get the timedate field to show without the time.
>> If your RS parameter is set to a string, the original
>> "convert(datetime,period,105)" should give you the listing you desire.
>> When
>> this parameter is then passed to SQL it "should" automatically be
>> recognized
>> as a date, but if not, you could pass the parameter as as string, and
>> then
>> declare and set a new SQL parameter to the cast(@.param as datetime).
>> This doesn't seem to work, if I set it to string and do the convert it
>> still
>> shows the time.
>> I'm trying to do it the other way around but get syntax error, what
could
>> be
>> wrong here:
>> CREATE FUNCTION udf_MyDate (@.indate datetime, @.separator char(1))
>> RETURNS Nchar(20)
>> AS
>> BEGIN
>> RETURN
>> CONVERT(Nvarchar(20), datepart(mm,@.indate))
>> + @.separator
>> + CONVERT(Nvarchar(20), datepart(dd, @.indate))
>> + @.separator
>> + CONVERT(Nvarchar(20), datepart(yy, @.indate))
>> END
>> GO
>> SELECT DISTINCT PERIODESTART, DAY(PERIODESTART) AS Expr1,
>> [dbo].[udf_MyDate]
>> (periodestart,'/') AS pstart
>> FROM DEBSTAT
>> WHERE (DAY(PERIODESTART) <> '31')
>> ORDER BY PERIODESTART
>> DROP FUNCTION [dbo].[udf_MyDate]
>> Jack
|||Here is the issue: if the data type of the report parameter is datetime, you have no choice; it will show the time. If you don't want to show the time, then you need to make the report parameter a string.
select convert(varchar(10),getdate(), 101) as Param
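Applied to the dataset from the earlier posts, the whole parameter query collapses to a single CONVERT; a sketch (DEBSTAT and PERIODESTART are the table and column named above; style 101 gives mm/dd/yyyy):

```sql
SELECT DISTINCT CONVERT(varchar(10), PERIODESTART, 101) AS pstart
FROM DEBSTAT
WHERE DAY(PERIODESTART) <> 31
-- note: ordering on the converted string sorts by month first,
-- not chronologically
ORDER BY pstart
```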
Bruce Loehle-Conger
MVP SQL Server Reporting Services
"Jack Nielsen" <no_spam jack.nielsen@.get2net.dk> wrote in message
news:%23kYqIZKjFHA.320@.TK2MSFTNGP09.phx.gbl...
> It's a dataset containing timedate fields, used as a parameter list, in
> the
> list I can only choose timedate not how to show it, tried a couple of
> things
> but it doesn't seem to work out as planned.
> It shows up like this 05/08/05 00:00:00 and I don't need the time only the
> date, Chris has tried to help me out with a userdef. function but I just
> can't get it right, see the statement below.
> Jack
>
> "Bruce L-C [MVP]" <bruce_lcNOSPAM@.hotmail.com> skrev i en meddelelse
> news:eMNLe1JjFHA.2472@.TK2MSFTNGP15.phx.gbl...
>> Right mouse click on the field, properties. Select the date format you
>> want (assuming I correctly understand what you are looking for. It
> sounds
>> like you are concerned about displaying it versus sending a parameter to
> a
>> query without a time.
>>
>> --
>> Bruce Loehle-Conger
>> MVP SQL Server Reporting Services
>>
>> "Jack Nielsen" <no_spam jack.nielsen@.get2net.dk> wrote in message
>> news:e41LODJjFHA.3472@.TK2MSFTNGP10.phx.gbl...
>> Still have some problems with the last thing on my first report ! Not
>> able
>> to get the timedate field to show without the time.
>> If your RS parameter is set to a string, the original
>> "convert(datetime,period,105)" should give you the listing you desire.
>> When
>> this parameter is then passed to SQL it "should" automatically be
>> recognized
>> as a date, but if not, you could pass the parameter as as string, and
>> then
>> declare and set a new SQL parameter to the cast(@.param as datetime).
>> This doesn't seem to work, if I set it to string and do the convert it
>> still
>> shows the time.
>> I'm trying to do it the other way around but get syntax error, what
> could
>> be
>> wrong here:
>> CREATE FUNCTION udf_MyDate (@.indate datetime, @.separator char(1))
>> RETURNS Nchar(20)
>> AS
>> BEGIN
>> RETURN
>> CONVERT(Nvarchar(20), datepart(mm,@.indate))
>> + @.separator
>> + CONVERT(Nvarchar(20), datepart(dd, @.indate))
>> + @.separator
>> + CONVERT(Nvarchar(20), datepart(yy, @.indate))
>> END
>> GO
>> SELECT DISTINCT PERIODESTART, DAY(PERIODESTART) AS Expr1,
>> [dbo].[udf_MyDate]
>> (periodestart,'/') AS pstart
>> FROM DEBSTAT
>> WHERE (DAY(PERIODESTART) <> '31')
>> ORDER BY PERIODESTART
>> DROP FUNCTION [dbo].[udf_MyDate]
>> Jack
>
>|||This is great, it works!
Thanks to all for your help, I really appreciate it!
Jack
"Bruce L-C [MVP]" <bruce_lcNOSPAM@.hotmail.com> skrev i en meddelelse
news:OezEyjKjFHA.4000@.TK2MSFTNGP12.phx.gbl...
> Here is the issue. If you have the data type of the report parameter as a
> datetime you have no choice, it will show the time. If you don't want to
> show the time then you need to have it as string datatype for the report
> parameter parameter.
> select convert(varchar(10),getdate(), 101) as Param
>
> --
> Bruce Loehle-Conger
> MVP SQL Server Reporting Services
> "Jack Nielsen" <no_spam jack.nielsen@.get2net.dk> wrote in message
> news:%23kYqIZKjFHA.320@.TK2MSFTNGP09.phx.gbl...
> > It's a dataset containing timedate fields, used as a parameter list, in
> > the
> > list I can only choose timedate not how to show it, tried a couple of
> > things
> > but it doesn't seem to work out as planned.
> >
> > It shows up like this 05/08/05 00:00:00 and I don't need the time only
the
> > date, Chris has tried to help me out with a userdef. function but I just
> > can't get it right, see the statement below.
> >
> > Jack
> >
> >
> > "Bruce L-C [MVP]" <bruce_lcNOSPAM@.hotmail.com> skrev i en meddelelse
> > news:eMNLe1JjFHA.2472@.TK2MSFTNGP15.phx.gbl...
> >> Right mouse click on the field, properties. Select the date format you
> >> want (assuming I correctly understand what you are looking for. It
> > sounds
> >> like you are concerned about displaying it versus sending a parameter
to
> > a
> >> query without a time.
> >>
> >>
> >> --
> >> Bruce Loehle-Conger
> >> MVP SQL Server Reporting Services
> >>
> >>
> >> "Jack Nielsen" <no_spam jack.nielsen@.get2net.dk> wrote in message
> >> news:e41LODJjFHA.3472@.TK2MSFTNGP10.phx.gbl...
> >> Still have some problems with the last thing on my first report ! Not
> >> able
> >> to get the timedate field to show without the time.
> >>
> >> If your RS parameter is set to a string, the original
> >> "convert(datetime,period,105)" should give you the listing you
desire.
> >> When
> >> this parameter is then passed to SQL it "should" automatically be
> >> recognized
> >> as a date, but if not, you could pass the parameter as as string, and
> >> then
> >> declare and set a new SQL parameter to the cast(@.param as datetime).
> >>
> >> This doesn't seem to work, if I set it to string and do the convert it
> >> still
> >> shows the time.
> >>
> >> I'm trying to do it the other way around but get syntax error, what
> > could
> >> be
> >> wrong here:
> >>
> >> CREATE FUNCTION udf_MyDate (@.indate datetime, @.separator char(1))
> >> RETURNS Nchar(20)
> >> AS
> >> BEGIN
> >> RETURN
> >> CONVERT(Nvarchar(20), datepart(mm,@.indate))
> >> + @.separator
> >> + CONVERT(Nvarchar(20), datepart(dd, @.indate))
> >> + @.separator
> >> + CONVERT(Nvarchar(20), datepart(yy, @.indate))
> >> END
> >> GO
> >>
> >> SELECT DISTINCT PERIODESTART, DAY(PERIODESTART) AS Expr1,
> >> [dbo].[udf_MyDate]
> >> (periodestart,'/') AS pstart
> >> FROM DEBSTAT
> >> WHERE (DAY(PERIODESTART) <> '31')
> >> ORDER BY PERIODESTART
> >>
> >> DROP FUNCTION [dbo].[udf_MyDate]
> >>
> >> Jack
> >
> >
> >
> >
> >
>
Friday, March 9, 2012
rda.SubmitSql causes OutOfMemoryException
My log submitter is an instance object. I also tried a static object, but that doesn't help either.
I do dispose of my rda object when I'm done with it.
Is there any way I can fix this?|||Could you provide some sample code? I do not understand the connection between the log files and SubmitSql - are you submitting the log file text in an INSERT statement?|||
Sure.
public class LogSubmitter
{
    // some local objects.

    public void SubmitLog(string logName, SqlCeRemoteDataAccess rda)
    {
        try
        {
            // read text from the log file and append it to StringBuilder: logBuilder
            // note: any single quotes in the log text must be doubled,
            // or the concatenated statement below is invalid SQL
            string logCmd = "EXEC InsertLog '" + logBuilder.ToString().Replace("'", "''") + "'";
            rda.SubmitSql(logCmd, remoteConnStr);
        }
        catch (Exception ex)
        {
            Log.WriteException(ex);
        }
    }
}
where InsertLog is a stored procedure on the server.
Is there a problem with this code?
PS: I only submit logs when the device is under WiFi coverage. The rda object is disposed in the caller.
|||Have a close look at your StringBuilder - have you set a useful initial capacity - otherwise you should try that.|||
Thank you very much.
You are right, I didn't set an initial capacity. Another reason could be that I submit 6 log files continuously, each log taking about 80 KB (max).
But one thing I cannot understand: I call logBuilder.Remove(0, logBuilder.Length) before submitting each log, so the StringBuilder should be empty before appending a new log.
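A minimal sketch of the pattern being suggested: pre-size the builder once and empty it between logs (the 80 KB figure comes from the post above; the class and member names are illustrative, and .NET CF has no StringBuilder.Clear, hence Remove):

```csharp
using System.Text;

class LogBuffer
{
    // pre-sized once so repeated ~80 KB appends do not force re-allocations
    private readonly StringBuilder logBuilder = new StringBuilder(80 * 1024);

    public string BuildAndReset(string logText)
    {
        logBuilder.Append(logText);
        string sql = logBuilder.ToString();
        // empty the builder for the next log file;
        // note the second argument is the length, not the builder itself
        logBuilder.Remove(0, logBuilder.Length);
        return sql;
    }
}
```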
|||Did we solve your outofmemory exception problems, then?|||Just got a log from my test scanner.
It still has this exception. I will make more changes with the "capacity" property.
By the way, does StringBuilder have a maximum capacity?
Cheers
|||It does, but the MaxCapacity property is not available in NETCF (according to the docs), so who knows what it may be?|||
Thanks Erik.
Is StringBuilder's memory reused if we remove contents from it and then append string again?
|||Yes, ErikEJ, I resolved this issue by checking the length before appending new chars.
The StringBuilder throws "OutOfMemoryException" when its length is greater than 2359294.
Thanks.
|||Hi, ErikEJ;
Now I have this exception again. The "Length" property is 57341 when this happens.
And I call the "Remove" method after I finish submitting a log. It looks like the GC doesn't collect that memory.
Thanks.
Wednesday, March 7, 2012
Saturday, February 25, 2012
RDA problem
I am attempting to execute an RDA.pull from my master database and it is generating exceptions the second time I attempt the pull.
NOTE: I always drop the table prior to executing the pull.
It seems that after the first time I execute a pull against a particular table name on the PDA I get the following:
"A duplicate value cannot be inserted into a unique index. [ Table name = __sysRDASubscriptions,Constraint name = c_LocalTableName ]"
Given that the table doesn't exist when I do the pull (I have already dropped it, and verified it's non-existence via the Query Analyzer), why does this error appear? It's almost as though some artifact of the table still exists in the background somewhere and it doesn't like responding to a pull...
Anyone see this before?
This only seems to manifest when I choose one of the trackingOn options; when tracking is off, this doesn't happen.|||If you call CompactDatabase after dropping the table, does that resolve the error? This is just a troubleshooting step to try.
Thank you!
Syed N. Yousuf
Microsoft Developer Support Professional
This posting is provided “AS IS” with no warranties, and confers no rights.
|||Hmm..Drop all these
1) The table you have pulled
2) The error table (if any you chose)
3) The PK Indexes of the pulled table (if you pulled with IndexesOn)
4) The Constraints for that table
Thanks,
Laxmi Narsimha Rao ORUGANTI, MSFT, SQL Mobile, Microsoft Corporation
|||Good idea Syed, but no luck. It didn't seem to make any difference if I compacted the database and then pulled into the new db created.
-Kevin
|||Thanks for your suggestions Laxmi
I'm having trouble with this. How can I drop the constraints and PK Indexes if I don't know what they're called? It would seem that SQL Server Mobile has assigned generated names to these, and I can't figure out how to drop them.
Is this even possible?
-Kevin
To see the indexes and their information, such as which table and columns they are created on, use this query:
SELECT * FROM INFORMATION_SCHEMA.INDEXES;
Similarly for constraints,
SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS;
SELECT * FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS
Hope this helps!
Thanks,
Laxmi Narsimha Rao ORUGANTI, MSFT, SQL Mobile, Microsoft Corporation
|||Hi Laxmi
Thanks for the info. This allows me to see the constraints and indexes, but SQL Mobile doesn't seem too keen on giving them up. When I try to delete my PK constraint (using ALTER TABLE xxx DROP CONSTRAINT) I get a message saying "DDL Operations on this table are restricted".
One thing I found that works is to simply delete the database file and recreate it prior to calling rda.pull. This would seem to be the simplest--are there any problems you see in this approach?
-Kevin
No problem, as long as there is no other table in that database. Please make sure that you transfer the data from the old database to the new database with this approach.
Thanks,
Laxmi Narsimha Rao ORUGANTI, MSFT, SQL Mobile, Microsoft Corporation
Monday, February 20, 2012
RB - Multiple Data Source for Report Builder
I add two data sources to the designer, create a new dsv using the first data source, then right-click in the designer to add a new table, this time using the second data source. When I create a Report Model and run it, an error occurs: "Message: Invalid object name 'dbo.tblTrade'. Command: SELECT COUNT(*) FROM [dbo].[tblTrade] t"
It does not recognize the second data source. Looking at the properties of the dsv, it only points to the first data source, not the other one...
If you look at the XML code for the dsv, there is a DataSourceID tag right above the Schema tag. Can this tag be expanded to include the 2nd data source? Can the XML code be tweaked to include the second data source?

</Annotations>
<DataSourceID>Db House01</DataSourceID>
<Schema>
|||"Our newly created Data Source is positioned as the default, and will serve us in meeting the objectives of our practice exercise. A Data Source View for a Report Model Project, unlike a Data Source View for an Analysis Services Project, can only reference a single Data Source. "
Is this true?
http://www.databasejournal.com/features/mssql/article.php/10894_3598931_4
|||Yes, that is correct.
|||Creating a named query can overcome the single data source limitation.|||
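For reference, the named-query workaround usually relies on the primary data source being able to reach the second database itself, e.g. through a linked server; a sketch (Server2 and HouseDb are illustrative names, tblTrade is the table from the error above):

```sql
-- named query defined in the DSV against the primary data source,
-- reaching the second database via a four-part linked-server name
SELECT t.*
FROM Server2.HouseDb.dbo.tblTrade AS t
```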
This solution does not work. If one tries to create a model based on such a Data Source View, the table entity for the named query defined in the DSV, which is not based on the primary data source, is not created.
Can you please give some steps to overcome this problem?
|||MSDN Online Book link below:
http://msdn2.microsoft.com/en-us/library/ms175683.aspx