1. Install the Java Runtime Environment or the JDK from www.java.com (the JDK is needed if you want to compile the test program in step 7).
2. Set the PATH environment variable for Java compiler and interpreter as
PATH=.:$PATH:"<JDK installation directory>/bin"
export PATH
3. Download jConnect (the Sybase JDBC driver) from http://www.sybase.com/products/middleware/jconnectforjdbc and untar/unzip the package.
4. You need to define two environment variables, JDBC_HOME and CLASSPATH.
i) Set JDBC_HOME as
[sybase@localhost jConnect-6_0]$ JDBC_HOME="/home/sybase/15.5/jConnect-6_0"
[sybase@localhost jConnect-6_0]$ export JDBC_HOME
[sybase@localhost jConnect-6_0]$ echo $JDBC_HOME
/home/sybase/15.5/jConnect-6_0
ii) Set the CLASSPATH as
[sybase@localhost jConnect-6_0]$ CLASSPATH=.:"$JDBC_HOME/classes/jconn3.jar":"$JDBC_HOME/classes"
[sybase@localhost jConnect-6_0]$ export CLASSPATH
[sybase@localhost jConnect-6_0]$ echo $CLASSPATH
.:/home/sybase/15.5/jConnect-6_0/classes/jconn3.jar:/home/sybase/15.5/jConnect-6_0/classes
5. Install the jConnect stored procedures and tables:
For jConnect to function properly, you must install stored procedures and tables on Adaptive Server Enterprise (ASE) or ASA.
Go to $JDBC_HOME/classes and run:
java IsqlApp -U sa -P -S jdbc:sybase:Tds:localhost.localdomain:5000 -I $JDBC_HOME/sp/sql_server15.0.sql -c go
6. If the installation is successful, test it.
You can verify it with the following jConnect sample; run it from $JDBC_HOME:
java sample2.SybSample Version
If everything is fine, the jConnect version is shown in the output window.
7. Test Java connectivity with Sybase ASE.
[sybase@localhost ~]$ cat JdbcTest.java
import java.sql.*;

public class JdbcTest
{
    public static void main(String args[])
    {
        try
        {
            // Load the jConnect 6.x driver and connect to the local ASE server.
            Class.forName("com.sybase.jdbc3.jdbc.SybDriver");
            Connection conn = DriverManager.getConnection(
                "jdbc:sybase:Tds:localhost.localdomain:5000", "sa", "");

            // List all databases on the server.
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT * FROM master..sysdatabases");
            while (rs.next())
                System.out.println(rs.getString(1));

            rs.close();
            stmt.close();
            conn.close();
        }
        catch (Exception e)
        {
            System.out.println(e);
        }
    }
}
i) Compile the code
[sybase@localhost ~]$ javac JdbcTest.java
ii) Test your code
[sybase@localhost ~]$ java JdbcTest
master
model
tempdb
sybsystemdb
sybsystemprocs
pubs2
sybsecurity
Sunday, February 6, 2011
Default, Rules and User Defined Datatype
Question
=======
Consider the following table creation script:
create table table1(id int not null,
cost smallmoney default $10
)
Will it allow null values in the column "cost"?
Answer
======
No. Unless nulls are explicitly allowed (or the database option "allow nulls by default" is turned on), the column is created NOT NULL; the default only supplies a value when "cost" is omitted from an insert, it does not make the column nullable.
Source (Unleashed)
Question
=======
How do you find all tables and columns that are bound to a rule or default?
Answer
======
select "table"=o.name, "column"=c.name,"user"=user_name(uid),"rule"=object_name(domain)
from sysobjects o,syscolumns c
where o.id=c.id
and object_name(c.domain)='rule/default name'
Sequence of events when using datatypes
=================================
sp_addtype ssn_type, 'char(9)', 'not null'
go
create rule ssn_rule as
@ssn between '001111' and '1111111'
or @ssn = 'N/A'
go
create default ssn_default as
'N/A'
go
sp_bindrule ssn_rule, ssn_type
go
sp_bindefault ssn_default, ssn_type
go
create table ssn_table
(ssn ssn_type, name varchar(30))
go
==================
A rule or default bound explicitly to a column overrides a rule or default bound to a datatype. A subsequent bind to the datatype replaces the bind to the column only as long as the column and the datatype had the same rule or default before the modification.
Renaming Objects
(Source - Unleashed)
Question
=======
If the name of a table is changed, will stored procedures and views that reference it still work, or will they become invalid?
Answer
======
Even though the name of the table has changed, objects such as views and procedures that refer to that table by name are not affected by the change. That is because SQL-based objects like views and procedures are stored both as text in the syscomments table and as a pre-parsed query tree that identifies the objects they reference by ID instead of by name. When a table name changes, a dependent view still works because the object ID of the table (stored in sysobjects) does not change.
Tuesday, February 1, 2011
Performance Tuning – Scenario Based
Suppose the join order we want in our query is
#tmp > Table1 > Table2
but the query plan uses the join order
Table1 > Table2 > #tmp
which causes a full index scan on both Table1 and Table2 before the temp table is used to filter the results. We can still get a performance improvement.
Solution --> We can use "set forceplan on" to force the join order to follow the order of the tables in the FROM clause. Although "forcing" the plan may give us table scans on Table1 and Table2, we can still gain performance by avoiding the index scans on those large tables, because a table scan over the small number of records that survive the #tmp filter can be faster than an index scan over a large number of records.
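To see how an application could apply this, here is a minimal JDBC sketch in the spirit of the JdbcTest program above. Table1, Table2, the status filter, and the column names are illustrative placeholders, not part of the original scenario.

import java.sql.*;

public class ForcePlanTest
{
    public static void main(String[] args) throws Exception
    {
        Class.forName("com.sybase.jdbc3.jdbc.SybDriver");
        Connection conn = DriverManager.getConnection(
            "jdbc:sybase:Tds:localhost.localdomain:5000", "sa", "");
        Statement stmt = conn.createStatement();

        // Build a small temp table that pre-filters the large tables.
        // Table1, Table2, and the columns used below are hypothetical.
        stmt.executeUpdate("create table #tmp (id int)");
        stmt.executeUpdate("insert #tmp select id from Table1 where status = 'OPEN'");

        // Force ASE to join in FROM-clause order: #tmp -> Table1 -> Table2.
        stmt.execute("set forceplan on");
        ResultSet rs = stmt.executeQuery(
            "select a.col1, b.col2 from #tmp t, Table1 a, Table2 b " +
            "where t.id = a.id and a.id = b.id");
        while (rs.next())
            System.out.println(rs.getString(1) + " " + rs.getString(2));
        rs.close();
        stmt.execute("set forceplan off");   // restore the optimizer-chosen join order

        stmt.close();
        conn.close();
    }
}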
Friday, January 28, 2011
Join Algorithms
Source : sybase.com and www
Time complexity of the nested loop join algorithm: O(N * M).
Time complexity of a hash join: O(N + M), where N is the hashed (build) table and M is the lookup (probe) table; hashing and hash lookups have constant complexity.
Time complexity of a merge join: O(N*log(N) + M*log(M)): the sum of the time to sort both tables plus the time to scan them.
Hash Join Algo
====================
The hash join algorithm builds an in-memory hash table of the smaller of its two inputs, and then reads the larger input and probes the in-memory hash table to find matches, which are written to a work table. If the smaller input does not fit into memory, the hash join operator partitions both inputs into smaller work tables. These smaller work tables are processed recursively until the smaller input fits into memory.
The hash join algorithm has the best performance if the smaller input fits into memory, regardless of the size of the larger input. In general, the optimizer will choose hash join if one of the inputs is expected to be substantially smaller than the other.
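To make the build/probe idea concrete, here is a minimal in-memory sketch in Java. It assumes integer join keys, skips the partition-to-disk step described above, and uses illustrative inputs.

import java.util.*;

public class HashJoinSketch
{
    // Joins the smaller (build) input to the larger (probe) input on an int key.
    // Returns pairs of matching row positions; a real operator would emit joined rows.
    static List<int[]> hashJoin(int[] buildKeys, int[] probeKeys)
    {
        // Build phase: hash table over the smaller input, key -> row positions.
        Map<Integer, List<Integer>> ht = new HashMap<>();
        for (int i = 0; i < buildKeys.length; i++)
            ht.computeIfAbsent(buildKeys[i], k -> new ArrayList<>()).add(i);

        // Probe phase: one hash lookup per row of the larger input.
        List<int[]> result = new ArrayList<>();
        for (int j = 0; j < probeKeys.length; j++)
            for (int i : ht.getOrDefault(probeKeys[j], Collections.emptyList()))
                result.add(new int[] { i, j });
        return result;                                   // O(N + M) overall
    }

    public static void main(String[] args)
    {
        int[] small = { 1, 2, 3 };
        int[] large = { 2, 2, 3, 4 };
        for (int[] pair : hashJoin(small, large))
            System.out.println("build row " + pair[0] + " matches probe row " + pair[1]);
    }
}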
Merge Join Algo
==================
The merge join algorithm reads two inputs which are both ordered by the join attributes. For each row of the left input, the algorithm reads all of the matching rows of the right input by accessing the rows in sorted order.
If the inputs are not already ordered by the join attributes (perhaps because of an earlier merge join or because an index was used to satisfy a search condition), then the optimizer adds a sort to produce the correct row order. This sort adds cost to the merge join.
One advantage of a merge join compared to a hash join is that the cost of sorting can be amortized over several joins, provided that the merge joins are over the same attributes. The optimizer will choose merge join over a hash join if the sizes of the inputs are likely to be similar, or if it can amortize the cost of the sort over several operations.
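Here is a minimal Java sketch of the merge step, assuming both inputs are already sorted on the join key (the sort the optimizer may add is not shown). Groups of equal keys on both sides produce their cross product.

import java.util.*;

public class MergeJoinSketch
{
    // Merge-joins two inputs that are already sorted ascending on the join key.
    static List<int[]> mergeJoin(int[] left, int[] right)
    {
        List<int[]> result = new ArrayList<>();
        int i = 0, j = 0;
        while (i < left.length && j < right.length)
        {
            if (left[i] < right[j])
                i++;
            else if (left[i] > right[j])
                j++;
            else
            {
                // Emit the cross product of the equal-key groups on both sides.
                int key = left[i];
                int jStart = j;
                for (; i < left.length && left[i] == key; i++)
                    for (j = jStart; j < right.length && right[j] == key; j++)
                        result.add(new int[] { i, j });
            }
        }
        return result;   // the merge itself is O(N + M); sorting, if needed, adds the log factors
    }

    public static void main(String[] args)
    {
        int[] left  = { 1, 2, 2, 5 };
        int[] right = { 2, 2, 3, 5 };
        for (int[] pair : mergeJoin(left, right))
            System.out.println("left row " + pair[0] + " matches right row " + pair[1]);
    }
}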
Nested Loops Join Algo
==================
The nested loops join computes the join of its left and right sides by completely reading the right hand side for each row of the left hand side. (The syntactic order of tables in the query does not matter, because the optimizer chooses the appropriate join order for each block in the request.)
The optimizer may choose nested loops join if the join condition does not contain an equality condition, or if it is optimizing for first-row time.
Since a nested loops join reads the right hand side many times, it is very sensitive to the cost of the right hand side. If the right hand side is an index scan or a small table, then the right hand side can likely be computed using cached pages from previous iterations. On the other hand, if the right hand side is a sequential table scan or an index scan that matches many rows, then the right hand side needs to be read from disk many times. Typically, a nested loops join is less efficient than other join methods. However, nested loops join can provide the first matching row quickly compared to join methods that must compute their entire result before returning.
Nested loops join is the only join algorithm that can provide sensitive semantics for queries containing joins. This means that sensitive cursors on joins can only be executed with a nested loops join.
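The same idea as a minimal Java sketch: the entire right-hand input is scanned once per left-hand row, which is where the O(N * M) cost comes from. The arrays are illustrative stand-ins for table or index scans.

import java.util.*;

public class NestedLoopsJoinSketch
{
    // For each left row, scan the whole right input looking for matches: O(N * M).
    static List<int[]> nestedLoopsJoin(int[] left, int[] right)
    {
        List<int[]> result = new ArrayList<>();
        for (int i = 0; i < left.length; i++)          // outer (left) input, read once
            for (int j = 0; j < right.length; j++)     // inner (right) input, read N times
                if (left[i] == right[j])               // equality used here, but any predicate works
                    result.add(new int[] { i, j });
        return result;
    }

    public static void main(String[] args)
    {
        int[] left  = { 1, 2, 3 };
        int[] right = { 2, 3, 3 };
        for (int[] pair : nestedLoopsJoin(left, right))
            System.out.println("left row " + pair[0] + " matches right row " + pair[1]);
    }
}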
Thursday, January 13, 2011
Performance Tuning - Scenario Based
Scenario 1
------------
An application issues multiple statements, and each statement calls a specific stored procedure. Being multithreaded, the application calls the stored procedure many times concurrently. It has been noticed that some calls to the procedure take much longer than others, depending on the set of parameters. What could be the issue?
Solution 1
-----------
First, we cannot squarely blame the stored procedure for the slow performance. We have a few options here:
1. Reduce the number of concurrent calls to the stored procedure (see the sketch after this list).
2. Check whether the procedure calls any child procedure and whether it passes the expected input parameters. Suppose the child procedure works on a location: if the parent procedure passes the location as an input parameter, the child processes only that location, but if the parameter is missing, the child procedure may process all available locations, which could be a huge number and consume a lot of time. Applying the proper check before calling the child procedure reduces the time consumed by the remaining calls.
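For option 1, here is a minimal Java sketch of capping the number of concurrent procedure calls with a fixed-size thread pool. The procedure name my_proc, its single parameter, and the pool size of 4 are illustrative assumptions, not part of the original scenario.

import java.sql.*;
import java.util.concurrent.*;

public class BoundedProcCalls
{
    private static final String URL = "jdbc:sybase:Tds:localhost.localdomain:5000";

    public static void main(String[] args) throws Exception
    {
        Class.forName("com.sybase.jdbc3.jdbc.SybDriver");

        // Cap the number of simultaneous procedure calls at 4 (illustrative value);
        // extra calls wait in the pool's queue instead of piling up on the server.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        for (int i = 0; i < 20; i++)
        {
            final int param = i;
            pool.submit(() -> {
                // my_proc and its single int parameter are hypothetical placeholders.
                try (Connection conn = DriverManager.getConnection(URL, "sa", "");
                     CallableStatement cs = conn.prepareCall("{call my_proc(?)}"))
                {
                    cs.setInt(1, param);
                    cs.execute();
                }
                catch (SQLException e)
                {
                    System.out.println(e);
                }
            });
        }

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
    }
}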