Thursday, January 16, 2020

Connect and query PostgreSQL using Azure Data Studio

Azure Data Studio is a new cross-platform database tool developed by Microsoft. It runs on Windows, macOS, and Linux. Although it is best known as a SQL Server tool, you can use it to connect to PostgreSQL servers as well.

If you're new to this tool, you can download and install it first. You then need to install the PostgreSQL extension for Azure Data Studio in order to connect to PostgreSQL servers.

Figure-1: Connection Page

Figure-2: Server details

Please note that if PostgreSQL is running on a remote server, you need to make sure it accepts remote connections and allows your machine's IP address. You can set the listen_addresses parameter in the postgresql.conf file to '*' to allow remote connections, and you also need to add your machine's IP address to the pg_hba.conf file. The IP address has to be written in CIDR notation, for example, 192.168.0.15/24.
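
For reference, the relevant entries might look like the following (the database, user, and authentication method below are only examples; adjust them to your environment). Note that changing listen_addresses requires a server restart, while pg_hba.conf changes only need a reload.

# postgresql.conf
listen_addresses = '*'          # listen on all network interfaces

# pg_hba.conf
# TYPE  DATABASE  USER      ADDRESS            METHOD
host    all       postgres  192.168.0.15/24    md5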

If you do not make the above configuration changes, errors like the ones below are typical.

could not connect to server: Connection timed out (0x0000274C/10060)
Is the server running on host "172.99.234.99" and accepting
TCP/IP connections on port 5432?

FATAL:  no pg_hba.conf entry for host "10.52.99.99", user "postgres", database "postgres", SSL off

References:
What is Azure Data Studio
Quickstart: Connect and query PostgreSQL using Azure Data Studio

Wednesday, December 11, 2019

Challenges with very large databases

We as DBAs always like to work with very large databases (VLDB) because they give us a lot of experience and make our resumes look great too. :) But in reality, managing VLDBs comes with tremendous challenges. Recently I was asked to provide some points on this topic.

As of this writing, the latest version of SQL Server released to RTM is SQL Server 2019.

These points are given for SQL Server, but in general they apply to any database system.

We need to scale a database when it accumulates a large amount of data in order to serve applications efficiently. There are two ways to scale a database system:

  1. Horizontal scaling, also called scaling out
  2. Vertical scaling, also called scaling up

SQL Server natively supports vertical scaling only, meaning we have to add more CPU and memory and build more powerful machines; that is why large databases require powerful hardware to meet their performance expectations. However, there is a limit to vertical scaling. That is the reason modern databases are being built to support scale-out (horizontal scaling) methods. Azure SQL Database supports horizontal scaling, but SQL Server itself does not.

What are the challenges with large databases?

  • Increased recovery time. In the event of a disaster, it takes more time to recover the database from a backup.
  • Database maintenance tasks, such as index rebuilds, index reorganization, and statistics updates, become more difficult. These maintenance tasks are necessary to keep database performance at a high level.
  • Increased database backup times. As part of a Disaster Recovery (DR) plan, any database system needs to be backed up. SQL Server has three backup types: full, differential, and transaction log backups. As the database grows, full and differential backups take longer. Taking backups does not block database activity, but it consumes resources, especially CPU.
  • Index rebuilds become almost impossible for very large tables because of the time they take and the blocking they create. SQL Server has an online rebuild option, but it carries a penalty in tempdb usage. Because indexes cannot be maintained as often as needed, we can notice poor query performance.
  • Even updating statistics with a full scan takes hours for very large tables. Updating statistics is a less resource-intensive operation, but it creates blocking and therefore impacts application performance. For this reason we have to find a suitable maintenance window for the operation. Maintenance windows are becoming very strict for business reasons, so we can't just take several hours of maintenance time for DB work.
  • If the database in question participates in transactional replication, rebuilding the replication takes ages because of the initial data set that has to be transferred to the subscriber. We can use the backup/restore method at the subscriber as an alternative way to initialize it; however, it can take days to copy the backup to the destination server, and the restore itself takes hours. For this reason, there is a high business impact during a replication rebuild.
  • Creating new indexes on very large tables takes time and is resource intensive. There are situations where we need to introduce new indexes to improve query performance, but doing so requires careful planning and a suitable time window to minimize the impact on the applications.
  • Table-level schema changes, such as adding a new column or changing a data type, take a lot of time on very large tables and need careful planning. We also need to manage the transaction log during such an operation, which requires a lot of effort and attention from DBAs.

All the points stated above are the challenges that I see with very large databases.

Some people argue that we can use table partitioning to manage VLDBs. However, that does not make the challenges stated above go away.

Here is my reasoning for why table partitioning is not the solution. Table partitioning happens at the storage level only; the compute and memory used by a partitioned table stay the same. It still has to use the same machine's (host's) memory and CPU, plus other resources such as network, to process the partitioned data. Partitioning data can help improve query performance, because a query then accesses only a subset of the data (assuming the partitioning is done accurately), but it does not solve the maintenance nightmare we face with data growth.
However, table partitioning is not a scaling methodology. An actual scale-out solution partitions the data as well as the other resources, including CPU, memory, and network, by adding additional nodes. A very good example is a Cassandra cluster: we can keep adding nodes to the cluster as the data grows, and each node of the cluster is assigned a portion of the data, so that portion of data has its own compute, memory, and other resources. Hope it's clear.

SQL Server has another feature called Stretch Database. This feature helps greatly here because you can offload a subset of data (cold data) to the Azure cloud. As long as you're not frequently accessing the data offloaded to the cloud, you deal only with the operational data set, which is great.

Cheers!

Tuesday, October 15, 2019

SQLSaturday #908 - Denver


I completed another SQLSaturday session this week in Denver. It was a pleasure to see about 20 folks at my presentation on SQL Server on Kubernetes. I'm really happy to see several sessions around containers, which is a good sign. DBAs are now stepping into this new era of virtualization, running SQL Server or any other database workload on Kubernetes.



https://twitter.com/denversql/status/1183116130073894913?s=09

Wednesday, April 4, 2018

How to setup AWS Application Load Balancer for Couchbase

Introduction

You might be wondering why we really need a load balancer for Couchbase. That is a reasonable question to ask, and I thought I would clarify this matter before we move further into the topic.
The objective here is NOT to use the load balancer for its noble task, which is distributing application traffic. The load balancer is used here as a tool to help manage the Couchbase connection string. By the end of this article you will understand how to manage the Couchbase connection string in a fully automated fashion.

How client applications work with Couchbase

Let's go through a brief description of how client applications work with Couchbase Server.

Any application that connects to Couchbase has a connection string which consists of one or more cluster nodes. Since Couchbase is a distributed database, a typical production deployment of Couchbase has multiple nodes in a cluster. As a result, you need to provide one or more of those cluster nodes' IP addresses or host names in the connection string. You don't have to list all of the cluster nodes in the connection string, but it is best practice to specify at least two host names, because if the application cannot connect to one, it will try the next node.
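
As a quick illustration, here is how such a connection string might be used from the Python SDK (a minimal sketch using the 2.x SDK; the host names, credentials, and bucket name are placeholders):

from couchbase.cluster import Cluster, PasswordAuthenticator

# Two seed nodes in the connection string; if node1 is unreachable,
# the client bootstraps from node2 instead.
cluster = Cluster('couchbase://node1.example.com,node2.example.com')
cluster.authenticate(PasswordAuthenticator('app_user', 'app_password'))
bucket = cluster.open_bucket('my_bucket')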

After establishing the connection to Couchbase, the smart client receives a cluster map from the server. In a nutshell, the cluster map holds information about the Couchbase nodes, the distribution of vBuckets, and so on. From that point, the smart client knows exactly where to read and write data. This process is called client bootstrap.

However, over time, a Couchbase cluster may shrink or grow depending on the application load. When that happens, the cluster nodes change. So what happens to the connection string you created at the initial cluster setup? That information is no longer fully accurate. Of course, you could update the connection string manually, but in an automated environment any manual work is something to avoid.

To resolve this, in other words to let the client application know the latest cluster nodes with no human intervention, you can use an AWS Application Load Balancer.

Typical role of application load balancer

AWS Application Load Balancer (ALB) is one of the newer services in AWS's arsenal. A load balancer is typically used as a single point of contact for clients, distributing incoming application traffic across multiple targets, such as EC2 instances. Knowing how applications connect to a Couchbase cluster, we can see that the typical use of a load balancer does not exactly fit this context: the load balancer cannot distribute the database traffic because it knows nothing about Couchbase.

Use of Application Load Balancer in Couchbase

The following diagram illustrates the basic architecture of an ALB used with Couchbase.

Create ALB for Couchbase cluster

It is now common for a Couchbase cluster to be deployed on AWS Auto Scaling groups, which provide a lot of benefits and an automation-friendly environment. Let's go through the step-by-step process of creating an ALB for Couchbase.

Assume you already have a Couchbase cluster deployed on an AWS Auto Scaling group.

1. Create an ALB

I'm not going to repeat the steps from the AWS docs here. Please follow the link given below to start creating the ALB; I will highlight the things we need to know in the Couchbase context while creating it.


In Basic Configuration there are two schemes you can select as seen below;

It's your choice whether to expose the Couchbase admin UI to the internet or keep it internal; select the Scheme accordingly. Let's assume we selected the "internet-facing" scheme.

In Availability Zones, select your VPC and the subnets in which you want to place the ALB. For the internet-facing scheme, you need to place your ALB in subnets that have internet access (an Internet Gateway). You should select more than one subnet for high-availability purposes.
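
If you prefer to script this step rather than use the console, a minimal boto3 sketch might look like this (the subnet and security group IDs are placeholders):

import boto3

elbv2 = boto3.client('elbv2')

# Internet-facing ALB placed in two subnets for high availability.
alb = elbv2.create_load_balancer(
    Name='test-alb-couchb',
    Type='application',
    Scheme='internet-facing',
    Subnets=['subnet-aaaa1111', 'subnet-bbbb2222'],
    SecurityGroups=['sg-0123456789abcdef0'],
)
print(alb['LoadBalancers'][0]['DNSName'])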

2. Create security group

Follow the steps as given on that page. Recommended rules are given in the link below;


Please note that since our ALB is internet-facing, the inbound source of the security group should be "0.0.0.0/0".
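
If you are scripting this step too, the inbound rule can be added with boto3 like this (the security group ID is a placeholder):

import boto3

ec2 = boto3.client('ec2')

# Allow inbound HTTP from anywhere, since the ALB is internet-facing.
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 80,
        'ToPort': 80,
        'IpRanges': [{'CidrIp': '0.0.0.0/0'}],
    }],
)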

3.  Create target group

In the Configure Routing section, please make sure you change the highlighted settings as shown below;
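
For reference, the equivalent boto3 call might look like the following; here I assume the target group forwards to the Couchbase admin UI port 8091 over HTTP (the VPC ID is a placeholder):

import boto3

elbv2 = boto3.client('elbv2')

# Target group that forwards ALB traffic to the Couchbase Web UI port.
tg = elbv2.create_target_group(
    Name='test-tg-couchb',
    Protocol='HTTP',
    Port=8091,
    VpcId='vpc-0123456789abcdef0',
    TargetType='instance',
)
print(tg['TargetGroups'][0]['TargetGroupArn'])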

4. Register target

Remember that our Couchbase cluster runs on an AWS Auto Scaling group, so we need to register the Auto Scaling group as one of the targets for the ALB. In this section you could register individual instances too, but that is not what we need here. So just skip this section for now and click Review.

Review all the settings and, if they are accurate, click Create.

At this point, your ALB has been created. However, your target group is still empty because we have not yet registered any EC2 instances with it. We will do that in the next step.

5. Register autoscaling group as the target

Select the Auto Scaling group on which the Couchbase server is deployed.

On the Details tab, select Edit.
Select the target group which you just created above. See the screenshot below;

Then click on Save.
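
The same attachment can also be done with boto3; a small sketch (the Auto Scaling group name and the target group ARN are placeholders):

import boto3

autoscaling = boto3.client('autoscaling')

# Attaching the target group to the Auto Scaling group makes every current and
# future Couchbase instance in the group a registered target of the ALB.
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName='couchbase-asg',
    TargetGroupARNs=['arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/test-tg-couchb/0123456789abcdef'],
)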

6. Verify target group

As soon as you finish step #5, switch over to Target Groups under Load Balancing and select the target group, e.g. test-tg-couchb.

On the Targets tab you can now see that the Couchbase cluster EC2 instances are registered. See the screenshot below.


Since the Couchbase cluster is a single instance (in this example), it shows just one instance. In the same section you can also see that the health check has passed, which means the ALB is able to communicate with the Couchbase server.
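
You can verify the same thing programmatically as well; a short boto3 sketch (the target group ARN is a placeholder):

import boto3

elbv2 = boto3.client('elbv2')

# Each registered Couchbase instance should report a 'healthy' state.
health = elbv2.describe_target_health(
    TargetGroupArn='arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/test-tg-couchb/0123456789abcdef'
)
for target in health['TargetHealthDescriptions']:
    print(target['Target']['Id'], target['TargetHealth']['State'])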

7. Access Couchbase server admin UI

Remember that we created an internet-facing ALB, so we can now access the admin UI using the ALB DNS name.


Just copy and paste this URL into the browser to access the Couchbase UI. Make sure the security group you created for the ALB accepts traffic from the internet.

See below that the security group is open to internet traffic.

8. Enable sticky session

If you are having trouble accessing the Web UI, make one more change in the target group to enable sticky sessions. This is especially required when you have multiple nodes in the Couchbase cluster.

Select the desired target group. Under Attributes on the Details tab, click Edit attributes and enable sticky sessions as shown below;


Then try the Couchbase Web UI once again.
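
If you prefer to make this change from code, the equivalent boto3 call looks roughly like this (the target group ARN is a placeholder):

import boto3

elbv2 = boto3.client('elbv2')

# Enable ALB cookie-based stickiness so a Web UI session keeps hitting the same node.
elbv2.modify_target_group_attributes(
    TargetGroupArn='arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/test-tg-couchb/0123456789abcdef',
    Attributes=[
        {'Key': 'stickiness.enabled', 'Value': 'true'},
        {'Key': 'stickiness.type', 'Value': 'lb_cookie'},
    ],
)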

Setting up Route 53

Earlier we just used the internally generated Amazon URL of the load balancer to access the Couchbase Web UI. Most often when we design things, we want to do it neatly. It is always better to have a customized URL for each Couchbase cluster so that it is easier to remember as well. You can achieve this by creating a public or private Route 53 entry and then pointing it to the ALB you just created above. You can refer to the link below on how to set up Route 53 for the load balancer.
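
As an illustration of that setup, a boto3 sketch that points a custom name at the ALB with an alias record (the zone IDs, names, and ALB DNS name are placeholders; the ALB's own hosted zone ID comes from describe_load_balancers):

import boto3

route53 = boto3.client('route53')

# Alias record: couchbase.couchb2.com -> the ALB's DNS name.
route53.change_resource_record_sets(
    HostedZoneId='Z1EXAMPLEZONE',  # your Route 53 hosted zone
    ChangeBatch={'Changes': [{
        'Action': 'UPSERT',
        'ResourceRecordSet': {
            'Name': 'couchbase.couchb2.com',
            'Type': 'A',
            'AliasTarget': {
                'HostedZoneId': 'Z2EXAMPLEALBZONE',  # the ALB's canonical hosted zone ID
                'DNSName': 'test-alb-couchb-1234567890.us-east-1.elb.amazonaws.com',
                'EvaluateTargetHealth': False,
            },
        },
    }]},
)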

How many load balancers do you need?

Well, a load balancer has a cost associated with it, and whatever cloud design you come up with also needs to consider the cost aspect. Having a load balancer for each Couchbase cluster is definitely expensive, but thanks to the new ALB you can now use one ALB to connect to multiple Couchbase clusters using a path-based or host-based routing strategy. The section below describes the routing strategy and how to configure it.

Routing strategy

In the Amazon web console, select the ALB that you created above and open the Listeners tab of the load balancer's properties. You will see something like the following;


This means the load balancer we created has just one listener. Notice the Rules section and click on View/edit rules. You will see the Rule Editor, as below;


As you can see, we can define rules for the ALB. Even though we did not create any explicit rules, there is a default rule, and it is associated with the target group (test-tg-couchb) we just created.

Here you can add, modify, and delete rules. Let's assume you have another target group pointing to a different Couchbase cluster that runs on another Auto Scaling group; let's call that target group test-tg-couchb2. Assume you also created a Route 53 entry pointing to the same ALB we created above; let's call that Route 53 entry couchbase.couchb2.com.

By adding a listener rule as shown above, you can tell the listener to divert all incoming calls for couchbase.couchb2.com to the target group test-tg-couchb2. Refer to the screenshot below;


Let's back up a little bit and try to understand what we just did.

We just added a new listener rule to the same ALB. The new rule is a host-based one, and it diverts all incoming traffic for the couchbase.couchb2.com Route 53 DNS name to the new target group, test-tg-couchb2, which essentially points to a different Couchbase cluster running on another Auto Scaling group. That is amazing, right?
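
The same rule can also be created with boto3; a sketch with placeholder ARNs:

import boto3

elbv2 = boto3.client('elbv2')

# Host-based rule: traffic for couchbase.couchb2.com is forwarded to test-tg-couchb2.
elbv2.create_rule(
    ListenerArn='arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/test-alb-couchb/0123456789abcdef/0123456789abcdef',
    Priority=10,
    Conditions=[{'Field': 'host-header', 'Values': ['couchbase.couchb2.com']}],
    Actions=[{'Type': 'forward',
              'TargetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/test-tg-couchb2/abcdef0123456789'}],
)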

Refer to the links below for more details about Listener Rules.

Couchbase connection string

Now you have set up a stable Route 53 DNS name for your Couchbase cluster. The next step is to use that DNS name to connect to your Couchbase cluster. Just to be clear, we are not going to use the DNS name directly to bootstrap the client SDK. We have already seen how it can be used to connect to the Couchbase admin UI.

Below is a Python code snippet that you can use to fetch the latest Couchbase nodes and then use those nodes to build the connection string as you normally would. Now you do not have to worry about adding or removing nodes in your Couchbase cluster, because nothing changes from the application's point of view: the endpoint it queries stays the same, and everything else is taken care of by the ALB.

import json
import requests

# The stable ALB / Route 53 endpoint; it always routes to a live Couchbase node.
url = 'http://couchbase-couchb2.com/pools/nodes'
u = 'admin'
p = 'Ch@ng3m3!'

response = requests.get(url, auth=(u, p))
nodes = json.loads(response.text)['nodes']

# Build a comma-separated host list from the current cluster topology.
hosts = ""
for n in nodes:
    hosts = hosts + ',' + n['hostname']
hosts = hosts.lstrip(',')
print(hosts)

# Use the host list to build the connection string, e.g. couchbase://node1,node2
connection_string = 'couchbase://' + hosts

Summary

In this article we went through how to use the Amazon Application Load Balancer to manage the connection string of a Couchbase cluster. We also learned how to use a single ALB to connect to multiple Couchbase clusters to reduce cost. While there are many solutions available to handle the connection string, you will find this one fully automated and easy to implement. I hope you learned something new.

However, the Couchbase-recommended method is to use a DNS SRV record, which is described below;
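
For reference (an illustration, not taken from the linked documentation), DNS SRV records for a plain couchbase:// connection conventionally use the _couchbase._tcp service label and point at the data port 11210, one record per node:

_couchbase._tcp.couchb2.com.  3600  IN  SRV  0 0 11210  node1.couchb2.com.
_couchbase._tcp.couchb2.com.  3600  IN  SRV  0 0 11210  node2.couchb2.com.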


While the DNS SRV method is efficient, it has some management overhead because you need to do some extra programming to manage the DNS SRV records as you add or remove nodes in the cluster. I still believe Couchbase has no developer-friendly method to manage connection strings in a cloud environment. In my opinion, this is something Couchbase needs to implement sooner rather than later.

Thursday, November 16, 2017

Couchbase: Bucket (Database) Architecture

Introduction

In Couchbase, a bucket is similar to a database in other DB systems. Applications insert data directly into a bucket, and there is no other schema or set of objects inside it. A bucket contains documents.
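
As a quick illustration of "a bucket contains documents", here is a small sketch using the Python SDK 2.x (the host, credentials, bucket name, and key are placeholders):

from couchbase.cluster import Cluster, PasswordAuthenticator

cluster = Cluster('couchbase://localhost')
cluster.authenticate(PasswordAuthenticator('Administrator', 'password'))
bucket = cluster.open_bucket('my_bucket')

# A document is simply a JSON value stored under a key, directly in the bucket.
bucket.upsert('user::1001', {'type': 'user', 'name': 'Alice', 'city': 'Denver'})
print(bucket.get('user::1001').value)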

Wednesday, November 8, 2017

Rack awareness feature in Couchbase

In distributed database systems, data is distributed across many nodes. If you consider Cassandra, it is not uncommon to have a cluster (in Cassandra terms, a ring) with 1,000 nodes or even more. These nodes are then grouped into different racks, or in cloud terms, different availability zones. The reason is that, in the event a whole rack (availability zone) goes down, data remains available because the replica partitions are on separate racks.

When it comes to Couchbase, the so-called rack awareness feature is controlled by using Groups. You can assign Couchbase servers to different Groups to achieve rack awareness.

If you are provisioning a Couchbase cluster on AWS, you can create server Groups analogous to the availability zones on AWS. This logical grouping in Couchbase allows administrators to specify that active and replica partitions be created on servers that are part of separate rack zones.
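
As an aside, server groups can also be managed through the REST API. A hedged sketch using Python requests (the host, credentials, and group names are placeholders):

import requests

base = 'http://couchbase-node1:8091'
auth = ('Administrator', 'password')

# Create one server group per AWS availability zone.
for group in ('rack-1a', 'rack-1b'):
    requests.post(base + '/pools/default/serverGroups', auth=auth, data={'name': group})

# List the server groups that now exist.
print(requests.get(base + '/pools/default/serverGroups', auth=auth).json())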

See figure-1 below and notice that the Couchbase cluster deployed on AWS has two server groups, one per availability zone.

Figure-1 - Multi-dimensional Couchbase cluster deployed on AWS

This cluster has two nodes for each service offering: data, index, and query. The servers are logically grouped into two groups, rack-1a and rack-1b, which correspond to availability zones 1a and 1b on AWS respectively. As a result, the servers are physically arranged in two racks.

It is recommended to have the same number of servers in each server group. If one server group has an unequal number of servers, the rebalance operation makes a best effort to evenly distribute replica vBuckets across the cluster.

The rack awareness feature is available only in Enterprise Edition of Couchbase. 

Cheers!

Friday, October 20, 2017

Couchbase - the engagement database system

Today I wrapped up the four-day Couchbase administration course from Couchbase. Since I'm still new to this NoSQL world, there is a ton of new stuff. Couchbase refers to itself as more than just a NoSQL database; they call it an engagement database system.

Couchbase also has a SQL-style query language which they call N1QL. Using N1QL you can query the JSON documents stored in Couchbase buckets.
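
For example, a small sketch with the Python SDK 2.x against a sample bucket (the bucket name and credentials are placeholders):

from couchbase.cluster import Cluster, PasswordAuthenticator
from couchbase.n1ql import N1QLQuery

cluster = Cluster('couchbase://localhost')
cluster.authenticate(PasswordAuthenticator('Administrator', 'password'))
bucket = cluster.open_bucket('travel-sample')

# N1QL: SQL-style syntax over the JSON documents stored in the bucket.
query = N1QLQuery("SELECT name, country FROM `travel-sample` WHERE type = 'airline' LIMIT 5")
for row in bucket.n1ql_query(query):
    print(row)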

What is special about Couchbase, as opposed to other NoSQL databases like Cassandra and MongoDB, is that it provides multi-dimensional scalability: different database components like the data service, query service, index service, and full-text service can be scaled in and out independently. This concept makes the distributed system even more complex, in my opinion. The main advantage of multi-dimensional scaling is that you can select different hardware resources for each service to best suit its workload, which is a very good thing.

Couchbase provides very good free online courses if you want to get familiar with the database system.


So for the next few months I'll be working closely on Couchbase, with automation on the AWS platform.

Cheers!