Wednesday, November 20, 2013

Barriers or excuses for not moving to the cloud

Ever since I started going hard at learning and experimenting with cloud computing on the leading IaaS provider - AWS - I have been wondering what stops CIOs from moving to the cloud. In the course of reading papers and talking to people, I collected the points they call barriers:


  • Performance and scalability - If a CIO thinks that one cannot move to the cloud and remain as scalable and high-performing, I have to say that he/she needs to be educated on some of the fundamentals of cloud computing. Automatic auto-scaling (up and down) is THE biggest asset (see the CLI sketch just after this list). How can someone prefer leaving their infrastructure idle, or shutting down services to free up server resources when demand peaks, over the auto-scaling capability offered by cloud providers such as AWS? Scientific researchers, NASA's live telecast from the Mars rover, banks, blue-chip companies - they are all in the cloud, albeit often in a hybrid model, and they do not say that their cloud infrastructure performs any worse.
  • Infrastructure "type" - Really? At the end of the day, do you want a "T5", an IBM XIV, a Solaris blade, etc., or are you looking for the same PERFORMANCE? Do you really want to keep managing that infrastructure, no matter how many outages you have to live with, or do you want highly scalable, available and durable infrastructure with a bare-minimum, utility-based costing model - NO MATTER WHETHER YOUR SERVICES RUN ON CHEAP INFRASTRUCTURE? Do you really care?
  • Security - Funny! It's one of the MAIN reasons for moving to the cloud. The level of security AWS can provide is pretty much impossible to achieve unless you move away from your CORE business - which may be selling a product - and do nothing but manage secure infrastructure :) Please read the hundreds of pages of the AWS Security White Paper - each and every service is secure, aside from the application-level and firewall-rule-level security that companies will add on top!
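
For illustration, a minimal sketch of what switching on Auto Scaling looks like with today's AWS CLI; the group name, AMI ID, instance type, sizes and scaling adjustment are made-up placeholders, not a recommended configuration:

# Launch configuration plus an Auto Scaling group spread across two availability zones.
aws autoscaling create-launch-configuration --launch-configuration-name web-lc \
    --image-id ami-12345678 --instance-type m1.small
aws autoscaling create-auto-scaling-group --auto-scaling-group-name web-asg \
    --launch-configuration-name web-lc --min-size 2 --max-size 10 \
    --availability-zones us-east-1a us-east-1b
# Simple scale-out policy: add two instances when triggered (e.g. by a CloudWatch alarm).
aws autoscaling put-scaling-policy --auto-scaling-group-name web-asg \
    --policy-name scale-out --scaling-adjustment 2 --adjustment-type ChangeInCapacity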

From what I can understand, companies will gradually move away from on-premise infrastructure ONLY when they see their competitors suddenly gaining an extraordinary advantage - but alas, by then it may already be too late for them to move. The whole idea is NOT TO WORRY ABOUT your infrastructure - which takes most of your IT time, resources and money - and to FOCUS on your CORE business: develop new services and products, deploy them within seconds for your customers, and add more agility to your IT environment. Make your infrastructure more secure, scalable and available, with multi-zone DR planning in place within minutes if not seconds. Compare this with on-premise infrastructure maintenance - dealing with vendors, upgrading Oracle, paying millions (even for idle infrastructure), waiting for approvals - rings a bell?

Monday, August 26, 2013

Setting up Amazon EC2 with MySQL and GlassFish, and access from behind a proxy


Check out the Android OpMedia Application intro
http://www.youtube.com/watch?v=Ta2FIHiTIn0&feature=youtu.be

Download link from Google Play Store
https://play.google.com/store/apps/details?id=com.kiikka.opmedia.android.media.activities&hl=en#!

Tuesday, April 9, 2013

Find out whether IBM AIX file systems are backed by SAN LUNs

Scenario: A developer needs to know whether a file system is local or pointing to a LUN. For instance, the file system he/she works on is /u02. Does it map to local storage or to a SAN?

Steps:
1. Issue the mount command to list the mounted file systems (logical and/or physical) and their mount points.

Example:
$ mount
node     mounted          mounted over     vfs    date         options
-------- ---------------  ---------------  ------ ------------ ---------------
         /dev/hd4         /                jfs2   28 Oct 03:11 rw,log=/dev/hd8
         /dev/hd2         /usr             jfs2   28 Oct 03:11 rw,log=/dev/hd8
         /dev/hd9var      /var             jfs2   28 Oct 03:11 rw,log=/dev/hd8
         /dev/hd3         /tmp             jfs2   28 Oct 03:11 rw,log=/dev/hd8
         /dev/hd1         /home            jfs2   28 Oct 03:16 rw,log=/dev/hd8
         /dev/hd11admin   /admin           jfs2   28 Oct 03:16 rw,log=/dev/hd8
         /proc            /proc            procfs 28 Oct 03:16 rw            
         /dev/hd10opt     /opt             jfs2   28 Oct 03:16 rw,log=/dev/hd8
         /dev/livedump    /var/adm/ras/livedump jfs2   28 Oct 03:16 rw,log=/dev/hd8
         /dev/fslv00      /audit           jfs2   28 Oct 03:16 rw,log=/dev/hd8
         /dev/u01lv       /u01             jfs2   28 Oct 03:16 rw,log=/dev/hd8
         /dev/u03lv       /u03             jfs2   28 Oct 03:16 rw,cio,log=INLINE
         /dev/u06lv       /u06             jfs2   28 Oct 03:16 rw,cio,log=INLINE
         /dev/u02lv       /u02             jfs2   28 Oct 03:16 rw,cio,log=INLINE
         /dev/u07lv       /u07             jfs2   28 Oct 03:16 rw,cio,log=INLINE
tggfilesvr1 /D/TempNFS       /mnt/tggfilesvr1 nfs3   28 Oct 03:16 rw,bg,hard,intr,sec=sys

The /dev/hd* entries are the standard rootvg logical volumes, while the u*lv entries are user-created logical volumes; in both cases, one or more physical drives (hdisk*) sit underneath them.

2. Issue the lslv command to list the details of a logical volume.
Example:
$ lslv -l u01lv
u01lv:/u01
PV                COPIES        IN BAND       DISTRIBUTION
hdisk7            184:000:000   34%           000:064:000:093:027
hdisk0            184:000:000   34%           000:064:000:101:019


$ lslv -l u02lv
u02lv:/u02
PV                COPIES        IN BAND       DISTRIBUTION
hdisk2            1450:000:000  25%           150:375:200:375:350


Clearly, LV u01lv is made up of hdisk7 and hdisk0, whereas u02lv (mounted on /u02) is mapped to hdisk2.

3. Now we know that the /u02 file system in question sits on physical drive hdisk2. To confirm whether that disk is in fact a SAN LUN, issue the following command:

$ lsattr -El hdisk2 |grep lun_id
lun_id          0x2000000000000                        Logical Unit Number ID           False

If there is no SAN involved, there will not be any output; otherwise, you will see a lun_id line like the one above.
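
To check several logical volumes in one go, here is a minimal sketch along the same lines; the list of LV names is just the examples from the output above, and the wording of the echo messages is mine:

# For each user logical volume, list its physical volumes and flag those that
# report a lun_id attribute (i.e. are SAN-backed LUNs).
for lv in u01lv u02lv u03lv u06lv u07lv; do
  echo "== $lv"
  for pv in $(lslv -l $lv | awk 'NR>2 {print $1}'); do
    if lsattr -El $pv | grep -q lun_id; then
      echo "   $pv : SAN LUN"
    else
      echo "   $pv : local disk (no lun_id)"
    fi
  done
done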


Thanks for reading.



Friday, March 22, 2013

Integration with a File System

Scenario: An ESB hooks up to an application via a file system - mounted or local. It polls for the file generated by the application; once available, it reads and processes it in order to send it to its final destination.

Challenges:
1. A partially generated file may get picked up by the ESB.
2. Non-transactional behavior.

Options:

Time-based polling: In this approach the ESB waits for a few minutes before polling the file system. Assumption: we know the maximum size of the file that the application will generate, and we are 100% sure that the file is completely generated after x minutes.

Risk: Suppose we poll for a file at time t, so the next poll starts at time t + x, on the assumption that the next file will have been completely written to disk by then. If the application generating the file fails and only starts writing again later, it may be too late for it to finish the file by t + x. Thus, the ESB gets a half-cooked file.

Conclusion: I think this approach is very risky and likely to produce data inconsistencies quite often in a production environment.

Size-based polling: In this approach the ESB polls several times before it "intelligently" concludes, by looking at the file size, that the file has been completely written. Assumption: after polling the same file n times, if the file size has not grown, we are 100% sure that the application has finished writing it.

Risk: The application may fail while writing the file. If the exception/compensation is not handled properly, the partially generated file will not be deleted and will stay on disk. After polling n times, the ESB will assume the file is complete because its size has not changed - when in fact it is incomplete and might be missing its trailer or header.

Conclusion: This is a much better option than the first. However, even here there are the problems mentioned above. So, unless there is a mechanism for the ESB to notify the application that it has processed x records or that there was a problem with the file (which is difficult in a file-based, fire-and-forget asynchronous approach), or proper exception handling in place within the application, there is always a significant risk of losing data - especially if the file structure and its parsing matter from the ESB's point of view.
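
As a minimal sketch of the size-stability check, assuming a POSIX shell on the ESB host; the path, poll interval and poll count are made-up values:

# Poll the same file up to 5 times, 60 seconds apart; treat it as complete
# only once two consecutive polls report the same size.
f=/esb/inbound/orders.dat          # hypothetical inbound file
prev=-1
for i in 1 2 3 4 5; do
  size=$(wc -c < "$f" | tr -d ' ')
  if [ "$size" -eq "$prev" ]; then
    echo "size stable at $size bytes - assuming the file is complete"
    break
  fi
  prev=$size
  sleep 60
done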

Polling for an "OK" file: In this approach the ESB waits for a 0 KB file with the same name as the data file but with an extra extension such as ".OK" or ".complete". Assumption: the application, and ONLY the application, knows when it has FINISHED writing a file to disk.

Risk: If not already present in the application, the additional functionality of generating the OK file needs to be implemented. In this case the ESB also only deals with archiving the OK file, so confusion can arise when there are lots of data files sitting in the ESB "Inbound" folder. Remember, in the previous options we archived the data files themselves so that the ESB would not pick them up twice.

Conclusion: Despite the minimal risks involved, in my opinion this option, possibly combined with Option 2, would be the preferred implementation. Because of the non-transactional behavior of the file system, the basic fact is that only the application knows when it has finished writing the file. Hence it is important that it gives another application (the ESB) a signal when it finishes writing, and Option 3 is entirely based on this idea; a minimal sketch follows. These options also assume that the application has no means of communicating with the ESB other than a file system.
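
A minimal sketch of Option 3 in shell, assuming the application and the ESB share a folder; the paths and the generate/process commands are hypothetical placeholders:

# Application side: write the data file first, then drop the zero-byte marker
# only after the write has fully completed.
generate_orders > /shared/inbound/orders_20130322.dat
touch /shared/inbound/orders_20130322.dat.OK

# ESB side: only pick up data files whose .OK marker exists, then archive both.
for ok in /shared/inbound/*.OK; do
  data=${ok%.OK}
  [ -f "$data" ] || continue
  process_file "$data" && mv "$data" "$ok" /shared/archive/
done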

Wednesday, November 28, 2012

Convert your Windows 7 laptop into a Wireless Hotspot

Situation: You have one internet connection via an Ethernet cable to your laptop, but several Wi-Fi mobile devices that need internet access.

Creating/configuring a Wi-Fi Virtual Miniport adapter:
1. Open a cmd window with Admin privileges.
2. Type in "netsh" and then "wlan".
3. set hostednetwork mode=allow ssid=<YourSSID> key=<YourKey> keyUsage=persistent
4. start hostednetwork
5. This will start the "Wireless Network Connection 2", which is your Wi-Fi Virtual Miniport adapter.
6. Then share your main LAN connection adapter by opening its Properties, selecting the "Share connection..." checkbox in the "Sharing" tab, and choosing "Wireless Network Connection 2" in the drop-down list.

You are done. Your Wi-Fi enabled devices should now automatically pick up the access point described by <YourSSID> and connect to it with <YourKey>.
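
For reference, a hedged sketch of the full command sequence; the SSID and key are placeholders you would replace with your own values:

:: Configure and start the hosted network (run in an elevated cmd prompt).
netsh wlan set hostednetwork mode=allow ssid=MyHotspot key=MyPassword123 keyUsage=persistent
netsh wlan start hostednetwork
:: Later, to stop the hotspot:
netsh wlan stop hostednetwork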

Friday, August 3, 2012

PowerEdge 2950 Access From ANYWHERE

As I have been setting up and fiddling around with my development environment, I have picked up quite a few learnings along the way. The first thing I wanted to do was access my home server from the office (yes, from behind a firewall, proxy, etc.). To cut a long story short, here are the steps taken:
1. Make a note of the ports on which the programs are running that are to be accessed remotely.
2. I got a DynDNS account already and hence got a public DNS name of my server.
3. Change the settings in the home modem/router to forward incoming TCP/UDP port requests to this particular machine on the relevant program's ports. For instance:
Rule 1: Forward TCP port xxxx to machine abc on TCP port yyyy.
4. Adjust the OS firewall running on the server machine to accept connections on port yyyy.
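
For instance, if the server happens to run Linux with iptables (the port shown here is a hypothetical stand-in for yyyy):

# Accept inbound TCP connections on the forwarded port.
iptables -A INPUT -p tcp --dport 3456 -j ACCEPT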

Then the real fun starts. After making all these configurations, I was able to access the programs using my public DNS name. However, I could not connect to the machine from behind my office firewall. Apparently the only outbound ports open through the proxy are 80 and 443.

To get around this:
1. Run SSH on your machine on a particular port (maybe 22, maybe 3456 - it does not matter).
2. Configure modem router to forward the request coming on to port 80 or 443 to the port configured in step 1.
3. Start an SSH tunnel from the office pointing to the machine on port 80 or 443. There are two ways to start it (see the sketch after these steps):
3.1 - Dynamic (SOCKS) port forwarding, which lets you reach any port on the remote server from outside the proxy. Any program can then be accessed as RemoteServer:port through the SOCKS proxy.

3.2 - Local-to-remote port forwarding - for example, accept requests on localhost at port xx and forward them to the remote server on port yy. Any program can then be accessed via localhost:xx.
4. Once the tunnel session is up, you can access your favorite programs over the internet.
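
For illustration, hedged versions of the two tunnel commands; the hostname, user name and port numbers are placeholders:

# 3.1 Dynamic (SOCKS) forwarding over the HTTPS port: point your applications at
#     the SOCKS proxy on localhost:8080 to reach any port on the remote server.
ssh -p 443 -D 8080 user@myhomeserver.dyndns.org

# 3.2 Local-to-remote forwarding: requests to localhost:8022 end up on the
#     remote server's port 5900 through the tunnel.
ssh -p 443 -L 8022:localhost:5900 user@myhomeserver.dyndns.org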

Thursday, July 26, 2012

Setting up SVN and a wiki on one EC2 instance

As part of one of my own Android projects, I configured an EC2 instance to host both a wiki and Subversion. Some interesting points came up when I tried to run Subversion standalone via the svnserve service on the default (or any other) port. After creating a security group that opened the relevant inbound ports on EC2 (such as 80, 8888, 3690 and 22), I noted that:

1. When I was able to connect to Subversion on 3690, I could not browse to the default Apache home page; it would give error 503: "Service Temporarily Unavailable".
2. When I was able to set up the wiki (from Bitnami), I was not able to telnet to the svn port at all (even though I was using the SAME security group - that is, the same inbound port configuration).

In these two cases there were two different EC2 instances involved.

Finally, I decided to go with option 2 and, instead of running svnserve, I hosted Subversion on Apache2, using the httpd.conf configuration file for authentication and for defining the repository root directory.
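
For illustration, a hedged sketch of the kind of mod_dav_svn block that goes into httpd.conf; the module paths, repository path, realm name and user file are assumptions, not the exact values used on this instance:

# Load the DAV/SVN modules (module paths depend on the Apache/Bitnami layout).
LoadModule dav_svn_module modules/mod_dav_svn.so
LoadModule authz_svn_module modules/mod_authz_svn.so

<Location /svn>
    DAV svn
    SVNParentPath /opt/svn/repositories
    AuthType Basic
    AuthName "Subversion Repository"
    AuthUserFile /opt/svn/svn-auth-file
    Require valid-user
</Location>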

I am using the Bitnami wiki, which I secured via the LocalSettings.php file in the Bitnami installation. No one can view the information at this point except for the registered users who are project members; the registration option is also disabled.

Subversion is configured in Apache2's httpd config and can be accessed over HTTP instead of the svn protocol. The biggest advantage is that both can be accessed from behind a proxy/firewall, as they all run on port 80.

Welcome to my blog

The contents here are my independent point of view. They reflect my thoughts and experiences in my professional and social life.
