
1 On 1 Fitness Training

June 11, 2013

Promo Page for Mike Tootelian’s Gym in Bucktown

http://www.1on1fitnesstraining.com/summer2013/

Here is the thank you page

http://www.1on1fitnesstraining.com/summer2013/thanks.html

 

How to Install Node.js on Ubuntu with Git

December 19, 2012

Step 1: Open Terminal
Step 2: Install dependencies as follows:

> sudo apt-get install build-essential git-core libssl-dev libssl0.9.8

Step 3: Download Node
> git clone git://github.com/joyent/node

Next, go to the directory you’ve just cloned:
> cd node

Step 4: Check out Node.js v0.8.4
> git checkout v0.8.4

Step 5: Compile and Install Node.js

> ./configure
> make
> sudo make install

Step 6: Type node -v to verify that Node is installed.

The terminal should print: v0.8.4
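The six steps above can be collected into one shell script (a sketch: it assumes an Ubuntu box with sudo privileges, and the package names are the ones current at the time of writing):

```shell
#!/bin/bash
# Sketch of the install steps above; run on Ubuntu with sudo privileges.
install_node_from_git() {
    # Step 2: install build dependencies
    sudo apt-get install -y build-essential git-core libssl-dev libssl0.9.8 &&
    # Step 3: fetch the source
    git clone git://github.com/joyent/node &&
    cd node &&
    # Step 4: pin the version
    git checkout v0.8.4 &&
    # Step 5: compile and install
    ./configure &&
    make &&
    sudo make install &&
    # Step 6: verify
    node -v
}
# Call install_node_from_git to run the whole sequence.
```

The `&&` chain stops at the first failing step, so a broken dependency install won’t lead to a half-finished build.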

How to Create an Amazon EC2 Instance and Install Node.js and Express

December 19, 2012

Create an Amazon EC2 Instance.
Choose Ubuntu Server 12.04 LTS 64 bit
Create Key Pair
Security Group: Open 3 TCP ports: 22, 80 and 9123.
Connect to instance: ssh -i keypair.pem ubuntu@publicdns
sudo apt-get update
sudo apt-get install libssl-dev g++ make
Go to nodejs.org/download/
Find the URL for the source code (the last item) and copy the link address so you can paste it into the terminal.

terminal
wget [url_of_source_code] [enter]
tar -xvf [file_name, in this case node-v0.8.9.tar.gz]
type ls [enter] to view node version directory
remove tar file as follows:
rm node-v0.8.9.tar.gz
cd node-v0.8.9
ls [enter]
from dir node-v0.8.9 run the following line of code:
./configure && make && sudo make install [enter]
[this takes a while]
Node should be built now.

cd ..
mkdir site1
cd site1
vim file1.js [enter]
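file1.js is just a placeholder name; as a quick sanity check, it could hold a one-line script that prints the running Node version (a sketch, assuming node is now on the PATH):

```shell
# Create a minimal file1.js that prints the running Node version.
cat > file1.js <<'EOF'
// Prints the Node version to confirm the build works.
console.log('Node is working: ' + process.version);
EOF
# Run it (only if node is actually installed on this machine).
if command -v node > /dev/null 2>&1; then
    node file1.js
fi
```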


How to Configure a LAMP Server and phpmyadmin in Amazon EC2

December 18, 2012

Go to aws.amazon.com
If you haven’t already, create an account with aws.amazon.com (credit card required). The first year of service is free (see details).
Instances (Running Instances) are “the server” or “VPS”.
There are various sizes. Micro, Large, etc. We need to select “Micro” to get the first year for free.
Also, for the Micro instances to be free for the first year, we need to select an OS other than Windows or SUSE Enterprise. If we select Windows or SUSE Enterprise as the OS, we will get charged even during the first year.
EBS Volumes: these are like the hard drive of our virtual computer. Ours has a size of 15 GB.
Key Pairs: The only way to initiate a session. They work as the keys to a house. We can later set up a user and password to access without the key pair.
Elastic IP: a public IP that is not tied to a particular server instance. If we don’t associate the IP with an instance, aws.amazon.com will charge us for leaving it unassociated, so we need to associate an Elastic IP with an instance to stay on the free service tier.
EBS snapshots: backups of our instance.
Security Groups: Here is where we open ports so that internet can go through Amazon’s firewall and access our instance.
Launch Instance.
Choose an AMI
The tab named Quick Start shows the OSes that are supported and maintained by Amazon. Amazon’s own flavor of Linux runs well on Amazon’s infrastructure, but we will not use it because it is harder to configure. We will use Ubuntu instead. (This is very important.)
Ubuntu is easy to configure, stable, and there is nothing wrong with it.
If we use SUSE or Windows, we pay roughly $0.03 per hour even on the Micro instance. If we use CentOS, we don’t pay anything. In this case we will use Ubuntu.
An AMI (Amazon Machine Image) is the virtual machine image that gets installed on our instance. Let’s flip over to the tab labeled “Community AMIs”. Community AMIs are virtual machines created by the community. In this case we will use a public AMI from http://www.alestic.com
In this case we will use Ubuntu 10.10 Maverick EBS boot (publisher: canonical, user: ubuntu@, server 32-bit ami-508c7839).
We search for ami-508c7839 in the Community amis filter and select it.
We make sure it is EBS and it is Ubuntu.

note: we could also have used from the Quickstart the Ubuntu Server 12.04 LTS which is free tier eligible. (Free tier eligibles are marked with a star next to them).

By default, the Small is selected. We need to change that and select the Micro Instance.
We click Continue. Key/Value => we give the instance a name in the “Value” field, so we have the key “Name” and the value “Name_of_Your_Instance”.
Select.
We need to create the key pair. Create New Key Pair. Give it a name. Click “create and download key”. The key will be stored in your downloads folder. You can move it to a place you will remember. Don’t lose it. Remember the path to the key. E.g. Documents/mykey.pem
Security Groups: You can use the default or create a new Security Group. Security group is where you open ports for the internet to go through the Amazon Firewall and reach your web server.
Name your security group => required
Describe your security group => optional
Open access to HTTP, HTTPS, and SSH. (We don’t need MySQL access if the MySQL server lives inside the server we are building, so don’t open that port.)
Hit continue.
We see a summary, click launch.
The instance will have a unique ID and will show the AMI that we used.
Click on the instance. Copy the public DNS.
Go to the terminal on the Mac, or PuTTY if using Windows.
Use the following command to connect to the instance:
ssh -i [path_to_key.pem] ubuntu@[public_dns]
We will have only about 4.6% of 15 GB used by the OS.
It will say when we connected to the instance last (last login).
It shows an internal IP that the Amazon network uses for communication between instances. The login banner also reports memory usage (26% of RAM in this case).
Because ubuntu@ is not root (the root account is deactivated on this instance), we have to prefix commands with sudo. We will now update the server:
sudo aptitude update (this updates all of the package lists; we will have to repeat this task regularly).
Answer y to accept and download the update.
We will now install the LAMP stack (Linux, Apache, MySQL and PHP):
sudo tasksel
select LAMP. You select with the space bar. Tab to “OK”. Enter.
We are asked for a password for the MySQL “root” user. The password should be secure, and we need to enter it twice.
Now we have a LAMP server running on the Ubuntu instance.
Now, if we copy and paste the public DNS into the browser, we should get the page “It Works”…”This is the default page for this server”…”The web server software is running but no content has been added, yet”. This page means that Apache (part of the LAMP stack) is working.
We will now install phpmyadmin:
sudo aptitude install phpmyadmin
Answer yes.
Select Apache with space bar. “Configure database for phpmyadmin…” Answer yes (Tab, Enter). We will be asked for a password for the user “root”.
Make note of the password, we will need it to work with phpmyadmin.
Once the installation is finished, we go to the browser, append /phpmyadmin to the public DNS URL, and we should see the phpmyadmin login page.
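The package steps above fit in a short script (a sketch; note that tasksel and the phpmyadmin installer still prompt interactively for the MySQL and phpmyadmin passwords):

```shell
#!/bin/bash
# Sketch of the LAMP + phpmyadmin setup above; run on the Ubuntu instance.
setup_lamp() {
    # Refresh the package lists (we will have to do this regularly).
    sudo aptitude update &&
    # Install the LAMP stack; select "LAMP server" with the space bar.
    sudo tasksel &&
    # Install phpmyadmin; select Apache and configure the database when asked.
    sudo aptitude install phpmyadmin
}
# Call setup_lamp on a fresh instance, then browse to
# http://[public_dns]/phpmyadmin to reach the login page.
```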

Will continue…

To install Node.js and Express on Amazon EC2, see the post of December 19, 2012 above.

How to Make a Twitter Robot Buffer with Google Appscript

June 17, 2012

Proper Uses of this Twitter Application:

1) To delay messages so that you are not bothering your audience with too many updates.

2) To have total control of when in the future your tweet will be sent.

3) To send unique messages instead of spam (duplicate, repeated messages).

Requirements

1) A Google Account.

2) A Twitter Account.

Helpful:

Some JavaScript knowledge, although you can probably just copy and paste my code.

By completing this tutorial, you will create a small JavaScript program that lets you type tweets into a Google spreadsheet and have them published to Twitter from that spreadsheet. You will be able to set specific time-driven triggers, and this is the main reason to complete this tutorial. If you don’t see a use for tweets being sent at specific intervals, or at specific points in the future, then this tutorial may not be of help to you.

Here are the specific steps:

Step 1

-Register Your Application with Twitter by going to this website and logging in using your Twitter User Name and Twitter Password:

https://dev.twitter.com/apps

This tutorial explains how to register the application under the heading “Setting Up Twitter”: https://developers.google.com/apps-script/articles/twitter_tutorial. The rest of that tutorial is not necessary at this point; just follow the instructions under “Setting Up Twitter”.

Note 1: The application name cannot contain the word “Twitter”.

Note 2: Make sure your application is set to Read and Write:

Note 3: Make sure your application Callback URL is set to https://spreadsheets.google.com/macros, as follows.

Callback URL https://spreadsheets.google.com/macros

 

Next, Obtain Your Consumer Secret and Consumer Key.

In this case, my consumer key and consumer secret are:

Consumer key ZZfsABbeY81PUrMNiWUpXA
Consumer secret wBPxTuZtuPYo6DMzkCYZGqOR5mAPUtYYM59oPQcHTY4

(I will delete this application after the tutorial so please don’t use the above key and secret).

Step 2

-Go to Google Documents and log in to your account.

-Create a new spreadsheet.

-Give it a name, like Google Buffer Demo.

- Open the Script Editor:

Copy and paste the following code (pay attention and change the second and third lines of the code to the key and secret strings obtained in Step 1):


function sendTweet() {

  var TWITTER_CONSUMER_KEY = "Insert the Key Obtained in Step 1";
  var TWITTER_CONSUMER_SECRET = "Insert the Secret Obtained in Step 1";
  var oauth = false;

  // Register the Twitter OAuth endpoints with the URL fetch service.
  function authTwitter() {
    var oauthConfig = UrlFetchApp.addOAuthService("twitter");
    oauthConfig.setAccessTokenUrl("https://api.twitter.com/oauth/access_token");
    oauthConfig.setRequestTokenUrl("https://api.twitter.com/oauth/request_token");
    oauthConfig.setAuthorizationUrl("https://api.twitter.com/oauth/authorize");
    oauthConfig.setConsumerKey(TWITTER_CONSUMER_KEY);
    oauthConfig.setConsumerSecret(TWITTER_CONSUMER_SECRET);
  }

  var requestData = {
    "method": "POST",
    "oAuthServiceName": "twitter",
    "oAuthUseToken": "always"
  };

  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var sheet = ss.getActiveSheet(); // get the active sheet
  var tweet = sheet.getRange("A1").getValue(); // A1 is the head of the queue

  var encodedTweet = encodeURIComponent(tweet);

  if (tweet != "") {

    if (!oauth) {
      authTwitter();
      oauth = true;
    }

    // Post the status update through the Twitter API.
    UrlFetchApp.fetch("https://api.twitter.com/1/statuses/update.json?status=" + encodedTweet, requestData);

    // Delete the tweet we just sent; the queue moves up one row.
    sheet.deleteRow(1);
  }
}

Step 3

1) Write your tweets in column A. You can use the LEN function (e.g. =LEN(A1)) in column B to track how many characters each tweet has.

The script works like this: the next tweet in the queue is the text in A1, the second in line is A2, the third A3, and so on. You can write as many as you want, always in column A, going down the rows; each cell is one tweet. Each time the code is triggered, a tweet is sent, the first row is deleted, and the next tweet moves up into A1 to become the next one sent. The nice thing about Google Apps Script is that you can set your own triggers, which takes us to Step 4:

Note: remember not to have the spreadsheet open in two browser tabs while you test the script. Keep the spreadsheet and script editor open in only one browser tab!

Step 4

Use the script triggers! This is where the fun lies! Just set your triggers by going to “Resources” and then clicking “All Your Triggers”. The process is self-explanatory. Important: don’t abuse the triggers by setting too-frequent updates! You will probably annoy your audience that way.

Step 5 (Optional, Experimental)

Set a for or while loop in the script so that you send multiple tweets in each run of the program. I am still experimenting with running loops from Google Apps Script against the Twitter API and am getting some errors, so this is more of a potential future improvement of the application.

If you have any questions about any of this, let me know and I will try to answer those questions! Have fun!

Video Breakdown of the process:

Part 1 

Part 2

Part 3
