1. Install the Java Runtime Environment (JRE) or the JDK from www.java.com.
2. Set the PATH environment variable so the shell can find the Java compiler and interpreter (where JAVA_HOME points to your Java installation):
PATH=.:$PATH:$JAVA_HOME/bin
export PATH
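As a concrete sketch, the PATH step can be verified like this. The JDK location /usr/java/jdk1.6.0 is an assumption for illustration; substitute your own install directory:

```shell
# Hypothetical JDK location - adjust for your installation
JAVA_HOME="/usr/java/jdk1.6.0"
PATH=".:$PATH:$JAVA_HOME/bin"
export JAVA_HOME PATH

# The last PATH entry should now be the JDK bin directory
echo "$PATH" | awk -F: '{print $NF}'   # prints /usr/java/jdk1.6.0/bin
```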
3. Download jConnect (the Sybase JDBC driver) from http://www.sybase.com/products/middleware/jconnectforjdbc and untar/unzip the package.
4. Define two environment variables, CLASSPATH and JDBC_HOME.
i) Set JDBC_HOME:
[sybase@localhost jConnect-6_0]$ JDBC_HOME="/home/sybase/15.5/jConnect-6_0"
[sybase@localhost jConnect-6_0]$ export JDBC_HOME
[sybase@localhost jConnect-6_0]$ echo $JDBC_HOME
/home/sybase/15.5/jConnect-6_0
ii) Set CLASSPATH:
[sybase@localhost jConnect-6_0]$ CLASSPATH=.:"$JDBC_HOME/classes/jconn3.jar":"$JDBC_HOME/classes"
[sybase@localhost jConnect-6_0]$ export CLASSPATH
[sybase@localhost jConnect-6_0]$ echo $CLASSPATH
.:/home/sybase/15.5/jConnect-6_0/classes/jconn3.jar:/home/sybase/15.5/jConnect-6_0/classes
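The interactive steps above can be collected into a single profile snippet. The jConnect directory is the one used throughout this post and is an assumption for your box:

```shell
# jConnect install directory from the example above - adjust to your install
JDBC_HOME="/home/sybase/15.5/jConnect-6_0"
# current directory first, then the driver jar, then the classes directory
CLASSPATH=".:$JDBC_HOME/classes/jconn3.jar:$JDBC_HOME/classes"
export JDBC_HOME CLASSPATH
echo "$CLASSPATH"
```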
5. Install the jConnect stored procedures and tables:
For jConnect to function properly, you must install stored procedures and tables on the Adaptive Server Enterprise (ASE) or ASA server.
Go to $JDBC_HOME/classes and run:
java IsqlApp -U sa -P -S jdbc:sybase:Tds:localhost.localdomain:5000 -I $JDBC_HOME/sp/sql_server15.0.sql -c go
6. If the installation succeeded, test it.
You can check the setup with the following sample, run from $JDBC_HOME:
java sample2.SybSample Version
If everything is fine, the jConnect version appears in the output window.
7. Testing Java connectivity with Sybase ASE
[sybase@localhost ~]$ cat JdbcTest.java
import java.sql.*;

public class JdbcTest
{
    public static void main( String args[] )
    {
        try
        {
            Class.forName( "com.sybase.jdbc3.jdbc.SybDriver" );
            Connection conn = DriverManager.getConnection( "jdbc:sybase:Tds:localhost.localdomain:5000", "sa", "" );
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery( "SELECT * FROM master..sysdatabases" );
            while( rs.next() )
                System.out.println( rs.getString(1) );
            rs.close();
            stmt.close();
            conn.close();
        }
        catch( Exception e )
        {
            System.out.println( e );
        }
    }
}
i) Compile the code:
[sybase@localhost ~]$ javac JdbcTest.java
ii) Test your code:
[sybase@localhost ~]$ java JdbcTest
master
model
tempdb
sybsystemdb
sybsystemprocs
pubs2
sybsecurity
Sybase Blog - Anything About Sybase ASE, REP, IQ.
Thursday, March 24, 2011
Friday, March 11, 2011
ASE Archive Databases for Compressed Backup.
Observations:
1. Compressed backups only work with the option: with compression="<level>".
2. If the backup was taken with the with compression option, there is no need to specify the compression level during load database.
3. We can run dbcc checks against recent dump files of a database without hampering the performance of the production database.
4. We can also perform object-level recovery using the archive database.
5. To map the dump stripes, ASE builds the sysaltusages table in the scratch database.
Testing:
1> dump database test_db to "/data/sybase/testing_host/dump/test_db.dmp1"
2> stripe on "/data/sybase/testing_host/dump/test_db.dmp2" with compression="4"
3> go
Backup Server session id is: 131. Use this value when executing the 'sp_volchanged' system stored procedure after fulfilling any
volume change request from the Backup Server.
Backup Server: 4.41.1.1: Creating new disk file /data/sybase/testing_host/dump/test_db.dmp1.
Backup Server: 4.41.1.1: Creating new disk file /data/sybase/testing_host/dump/test_db.dmp2.
Backup Server: 6.28.1.1: Dumpfile name 'sr_load1106806A97' section number 1 mounted on disk file
'/data/sybase/testing_host/dump/test_db.dmp2'
Backup Server: 6.28.1.1: Dumpfile name 'sr_load1106806A97' section number 1 mounted on disk file
'/data/sybase/testing_host/dump/test_db.dmp1'
Backup Server: 4.188.1.1: Database test_db: 11960 kilobytes (4%) DUMPED.
Backup Server: 4.188.1.1: Database test_db: 29094 kilobytes (6%) DUMPED.
Backup Server: 4.188.1.1: Database test_db: 315648 kilobytes (100%) DUMPED.
Backup Server: 4.188.1.1: Database test_db: 323870 kilobytes (100%) DUMPED.
Backup Server: 3.43.1.1: Dump phase number 1 completed.
Backup Server: 3.43.1.1: Dump phase number 2 completed.
Backup Server: 3.43.1.1: Dump phase number 3 completed.
Backup Server: 4.188.1.1: Database test_db: 323884 kilobytes (100%) DUMPED.
Backup Server: 3.42.1.1: DUMP is complete (database test_db).
1> !!ls -ltr /data/sybase/testing_host/dump/test_db.dmp*
[sh:ls -ltr /data/sybase/testing_host/dump/test_db.dmp*]
-rw-r----- 1 sybase dba 40783872 Mar 9 07:35 /data/sybase/testing_host/dump/test_db.dmp1
-rw-r----- 1 sybase dba 41261056 Mar 9 07:35 /data/sybase/testing_host/dump/test_db.dmp2
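A rough compression ratio can be computed from the numbers above: the dump reported 323884 KB written, while the two compressed stripes total about 82 MB on disk:

```shell
# 323884 KB dumped vs. the two stripe file sizes from the ls output above
awk 'BEGIN { printf "%.2f\n", 323884 * 1024 / (40783872 + 41261056) }'   # prints 4.04
```

So compression="4" shrank this particular database roughly 4x; your ratio will depend on the data.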
1> create database scratch on data16='100M' log on log01='100M'
2> go
CREATE DATABASE: allocating 51200 logical pages (100.0 megabytes) on disk 'data16'.
CREATE DATABASE: allocating 51200 logical pages (100.0 megabytes) on disk 'log01'.
Database 'scratch' is now online.
1> sp_dboption 'scratch',"scratch database", "true"
2> go
Database option 'scratch database' turned ON for database 'scratch'.
Running CHECKPOINT on database 'scratch' for option 'scratch database' to take effect.
(return status = 0)
1> use scratch
2> go
1> checkpoint
2> go
1> use master
2> go
1> sp_configure 'compression memory size'
2> go
 Parameter Name                Default     Memory Used Config Value Run Value   Unit              Type
 ----------------------------- ----------- ----------- ------------ ----------- ----------------- ----------
 compression memory size       0           152         0            0           memory pages(2k)  dynamic
(1 row affected)
(return status = 0)
1> sp_configure 'compression memory size',64
2> go
 Parameter Name                Default     Memory Used Config Value Run Value   Unit              Type
 ----------------------------- ----------- ----------- ------------ ----------- ----------------- ----------
 compression memory size       0           280         64           64          memory pages(2k)  dynamic
(1 row affected)
1> create archive database archivedb
2> on data15='50M'
3> with scratch_database = scratch
4> go
CREATE DATABASE: allocating 25600 logical pages (50.0 megabytes) on disk 'data15'.
1> load database archivedb from "/data/sybase/testing_host/dump/test_db.dmp1"
2> stripe on "/data/sybase/testing_host/dump/test_db.dmp2"
3> go
Started estimating recovery log boundaries for database 'archivedb'.
Database 'archivedb', checkpoint=(303154, 21), first=(303154, 21), last=(303155, 6).
Completed estimating recovery log boundaries for database 'archivedb'.
Started ANALYSIS pass for database 'archivedb'.
Completed ANALYSIS pass for database 'archivedb'.
Started REDO pass for database 'archivedb'. The total number of log records to process is 11.
Redo pass of recovery has processed 2 committed and 0 aborted transactions.
Completed REDO pass for database 'archivedb'.
Use the ONLINE DATABASE command to bring this database online; ASE will not bring it online automatically.
1> online database archivedb
2> go
Started estimating recovery log boundaries for database 'archivedb'.
Database 'archivedb', checkpoint=(303154, 21), first=(303154, 21), last=(303155, 6).
Completed estimating recovery log boundaries for database 'archivedb'.
Started ANALYSIS pass for database 'archivedb'.
Completed ANALYSIS pass for database 'archivedb'.
Recovery of database 'archivedb' will undo incomplete nested top actions.
Database 'archivedb' is now online.
1> use archivedb
2> go
1> set rowcount 5
2> go
1> select name from sysobjects where type='U'
2> go
name
 ----------------------------------------
testing_FPOSDailyFunding
testing_IDeltaContracts
testing_RMSRates
testing_ESSPLFXRpt
testing_ESSAlgo
(5 rows affected)
1> set rowcount 0
2> go
1> select * from sysaltusages where 1=2
2> go
dbid location lstart size vstart vdevno segmap
------ ----------- ----------- ----------- ----------- ----------- -----------
(0 rows affected)
1> set rowcount 5
2> go
1> select * from sysaltusages
2> go
dbid location lstart size vstart vdevno segmap
------ ----------- ----------- ----------- ----------- ----------- -----------
8 4 0 256000 1 24 3
8 4 256000 51200 1 24 4
8 4 307200 256000 1 24 3
8 5 0 32 42 26 3
8 5 32 32 51 26 3
(5 rows affected)
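The size column of sysaltusages is in logical pages; summing the five rows above shows how many pages location 4 (the dump stripes) and location 5 (the scratch database) have mapped so far:

```shell
# size column values taken from the sysaltusages rows above
awk 'BEGIN { print 256000 + 51200 + 256000 + 32 + 32 }'   # prints 563264
```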
ASE Interview Questions!
What is an identity column?
What are the advantages and disadvantages of identity columns?
From a performance point of view, which is better: if exists or if not exists?
How can we avoid fragmentation in a table?
There is an update statement on one APL and one DOL table. Which one would be faster? Consider two cases: the where clause is on the clustered index column, and the where clause uses no index.
Why is reorg faster on a DOL table compared to a clustered index rebuild on an APL table?
Why is creating a clustered index with sorted_data on an APL table faster than reorg rebuild on a DOL table?
What is the Sybase recommendation for tempdb size? Suppose we have 300 GB and 150 GB databases in the server; what would be the recommended tempdb sizing?
What is the difference between dsync and direct I/O?
Suppose we are not concerned about recovery of the database; which would perform better, dsync (on/off) or direct I/O, and why?
What is asynchronous prefetch? How does it help enhance performance?
We have a 4K page size server; what are the possible buffer pool sizes in the server?
Sybase recommends a 4K pool for log usage on a 2K page size server; what is the pool recommendation for a 4K page size server?
How can we reduce spinlock contention without partitioning the data cache?
Can we have spinlock contention with a single engine?
In a sysmon report, which five sections do you look at for performance?
What is the metadata cache?
What is an archive database?
How can we enable an archive database for compressed backups?
How is object-level recovery possible in ASE?
How can we find the culprit spid that has filled up the tempdb database?
How can we find the culprit spid that has heavily used the log segment of tempdb?
What is partitioning? How does partitioning help increase performance?
Suppose a table is partitioned on a column; how will the dataserver handle an insert into that table?
Apart from query plans, what else resides in the procedure cache?
What is the new config parameter "optimization goal"? What values can we provide for it?
A user is experiencing very slow performance; what can be the reasons for the slowness?
What is engine affinity and how can we set it?
If there are 3 CPUs in the box, how many engines can we configure?
Suppose the dataserver is running very slowly and sp_monitor shows 100% CPU usage; what are the possible issues, and where will you look?
What are error classes in Replication Server?
What is the difference between warm standby and table-level replication?
Can you list five cases in which a thread goes down in replication?
What are triggers? What types of triggers are there, and how many triggers can we configure on a table?
What are the different locking schemes in ASE, and what are latches?
How can we dump a replication queue?
Wednesday, September 22, 2010
Sybase ASE 12.5 to 15 Upgrade Steps - II
Upgrade Steps:
Before proceeding with any further steps, please make sure the preupgrade check is clean and that you have backups of all databases, tables, etc.
1. Make sure you are in the ASE 15 environment (by executing .profile_15).
2. cd to the sample resource files directory.
3. Copy the SQL server resource file to a new name before modifying the original file.
4. Edit the resource file: point it at the old Sybase home directory (ASE 12.5), set the reserved-word check to yes, and update the remaining values as per your environment.
5. There is no need to shut down your ASE 12.5 server; sqlupgraderes takes care of everything.
6. Execute this command only once you have completed all the above steps and taken all backups (MOST IMPORTANT).
sqlupgraderes performs the preupgrade again before the actual upgrade (you can modify this behaviour in the resource file).
sqlupgraderes automatically copies the interfaces file, configuration file, and RUN_SERVER file into the ASE 15.x directories.
Execute the following command with the resource file: sqlupgraderes -r <resource_file_name>
Copying interfaces file entry to new directory...
The interfaces file entry has been copied to the new directory.
Running preupgrade program...
The preupgrade program has completed successfully.
Checking reserved words in each database...
No reserved word conflicts were found.
Checkpointing all databases...
All databases have been checkpointed.
Copying configuration file to new directory...
The configuration file has been copied to the new directory.
Updating RUN_SERVER file...
The RUN_SERVER file has been updated.
Doing pre-upgrade modifications...
Pre-upgrade modifications succeeded.
Restarting Adaptive Server....
Adaptive Server has been restarted.
Running upgrade program...
The upgrade program has completed successfully.
Restarting Adaptive Server....
Adaptive Server has been restarted.
Running installmaster script...
installmaster: 10% complete.
installmaster: 20% complete.
installmaster: 30% complete.
installmaster: 40% complete.
installmaster: 50% complete.
installmaster: 60% complete.
installmaster: 70% complete.
installmaster: 80% complete.
installmaster: 90% complete.
installmaster: 100% complete.
The installmaster script has been successfully installed.
Running installcommit script...
The installcommit script has been successfully installed.
Installing common character sets (Code Page 437, Code Page 850, ISO Latin-1,
Macintosh and HP Roman-8)...
Character sets installed.
Restarting Adaptive Server....
Adaptive Server has been restarted.
Done
7. Log in to the server; check the current version with @@version and @@boottime, and check database/device status.
Post-upgrade tasks:
1. Install the 15.x stored procedure scripts after the upgrade, depending on which functionality is used on the server:
installsecurity – run this script if Adaptive Server auditing was enabled in your earlier installation.
installhasvss – run this script if HA functionality is enabled and the HA feature is currently in use in the upgraded installation of Adaptive Server.
installmodel
instmsgs.ebf
Check the installation script versions with sp_version after logging in to ASE.
2. Compare the sp_configure output to see if there is any difference. Verify the procedure cache allocation after upgrading. Verify that the data cache has not changed.
3. Make sure the license file is in use.
Save the license files with a .lic file name extension to the SYSAM-2_0/licenses directory of the license server installation.
Modify the variable in .profile for any instances on the same box.
For example:
LM_LICENSE_FILE=servername.lic
If you do not have a .lic file, the server runs under a grace period of one month, and you will see a corresponding warning in the errorlog.
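A minimal profile snippet for the license variable might look like this; the file name servername.lic comes from the example above, and the SYSAM path under $HOME is an assumption for your layout:

```shell
# Hypothetical license file location under the SYSAM installation
LM_LICENSE_FILE="$HOME/SYSAM-2_0/licenses/servername.lic"
export LM_LICENSE_FILE
echo "$LM_LICENSE_FILE"
```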
4. Reboot the instance.
5. Re-enable auditing if needed.
Wednesday, September 1, 2010
Renaming the Adaptive Server - ASE
Before doing any step:
i) Take a backup of the master database, along with all other important critical databases.
ii) Take a backup of every file you modify, so you can back out in case of any issue.
iii) Stop client/application activity on the server.
1. First modify the sysservers table in the master database and shut down the dataserver.
Enable 'allow updates' on system tables, then modify sysservers so that srvname and srvnetname hold the new server name.
2. Copy the RUN server file to a new name.
3. Modify the new RUN server file for the new server name, errorlog file name (optional), and config file name (optional).
4. If you change the config file name in the new RUN server file, copy the old config file to the new name.
5. Take a backup of the interfaces file, then either rename the server entry or add the new server name with a different port number.
6. Now restart your dataserver with the new RUN server file.
7. Verify connections via isql and with the application team.
The same steps can be repeated for the Sybase Backup Server.
Thanks!!
Tuesday, August 17, 2010
Replication Server Interview questions updated!!!
Hi Folks,
Just added some more questions for Sybase Replication Server on the interview questions page at http://sybaseblog.com/interviewquestions/
Happy Learning sybase ...:)
Have Fun....
Cheers!!!!