Use ssh to run a shell script on a remote machine

Also, don’t forget to escape variables if you want to pick them up from the destination host.

This has caught me out in the past.

For example:

user@host> ssh user2@host2 "echo \$HOME"
prints out /home/user2

while

user@host> ssh user2@host2 "echo $HOME"
prints out /home/user

Another example:

user@host> ssh user2@host2 "echo hello world | awk '{print \$1}'"
prints out "hello" correctly.
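
To run an entire local script on the remote machine, rather than a single command, a common idiom is to pipe the script into a shell on the far end (the script name here is just an example):

user@host> ssh user2@host2 'bash -s' < myscript.sh

Because the script body travels over stdin, its variables are expanded on the remote host, which sidesteps the escaping issue above.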

What's new in Node.js v0.12: a synchronous API for child processes

Although Node.js was invented primarily for writing web servers, developers keep discovering other uses (and non-uses!) for it. Surprisingly enough, one of those uses is writing shell scripts. And that actually makes sense: Node's cross-platform support is quite good these days, and when both your front end and your back end are written in JavaScript, wouldn't it be nice if your build system were written in JavaScript too?

Asynchrony is bad for shell scripts

The noteworthy library in this space is Grunt, which is built on top of ShellJS. ShellJS has a hard nut to crack, though: Node forces it to use asynchronous I/O. While async I/O is great for a web server that must stay responsive at all times, it makes little sense for a shell script that executes step by step.

So the ShellJS authors came up with an "interesting" workaround that lets them run a shell command and then wait until it completes. Roughly, it looks like the following code:

var child_process = require('child_process');
var fs = require('fs');

function execSync(command) {
  // Run the command in a subshell
  child_process.exec(command + ' 2>&1 1>output && echo done! > done');

  // Block the event loop until the command has completed
  while (!fs.existsSync('done')) {
    // Do nothing
  }

  // Read the output
  var output = fs.readFileSync('output');

  // Delete the temporary files
  fs.unlinkSync('output');
  fs.unlinkSync('done');

  return output;
}
In other words: while the shell is executing your command, ShellJS keeps running and continuously polls the file system, checking whether it can find the file that signals that the command has finished. A bit like a donkey.

This inefficient and ugly workaround provoked the Node core team into implementing a real solution: Node v0.12 should finally be able to run child processes synchronously. The feature had actually been on the roadmap for a very long time; I remember sitting down with now-retired Node maintainer Felix Geisendoerfer at JSConf.eu 2011 (!) and sketching out an implementation of execSync. Now, more than two years later, the feature has finally landed on the master branch.

Congratulations, ShellJS people, that was a very nice nit to pick! 🙂

Synchrony is good for shell scripts

The API we just added, spawnSync, is similar to its asynchronous sibling: it is a low-level API that gives you full control over how the child process is set up, and it returns all the information we can collect: the exit code, the termination signal, a possible startup error, and the process's entire output. Of course, streams make no sense with spawnSync (it is synchronous, so event handlers can't run before the process exits), so the process's output is buffered into a single string or buffer object.
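
A minimal sketch of what that looks like (the command, its arguments and the option values here are just examples):

var child_process = require('child_process');

// Run `ls -l /tmp` synchronously and collect everything it produces
var result = child_process.spawnSync('ls', ['-l', '/tmp'], { encoding: 'utf8' });

if (result.error) {
  // The process could not be started at all (e.g. executable not found)
  console.error('failed to start:', result.error);
} else {
  console.log('exit code:', result.status);  // non-zero means failure
  console.log('signal:', result.signal);     // e.g. 'SIGTERM' if it was killed
  process.stdout.write(result.stdout);       // the buffered output
}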

And just like the well-known exec (which runs a shell command) and execFile (which runs an executable file), we added execSync and execFileSync for the common cases; they are easier to use than spawnSync. If you use these APIs, Node assumes that all you care about is the data the process writes to stdout. If the process or the shell exits with a non-zero code, node assumes something went wrong and exec(Sync) throws.

For example, fetching a project's git history is as simple as:

var history = child_process.execSync('git log', { encoding: 'utf8' });
process.stdout.write(history);
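
Since exec(Sync) throws on a non-zero exit code, failures can be handled with a plain try/catch; a small sketch (the deliberately failing command is arbitrary):

var child_process = require('child_process');

try {
  // The unknown flag makes git exit with a non-zero code
  child_process.execSync('git log --no-such-flag', { encoding: 'utf8' });
} catch (err) {
  // The thrown error carries the exit status and the captured output
  console.error('command failed with status', err.status);
  console.error(err.stderr);
}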
Now you may be wondering: "what took so long?" On the face of it, starting a child process and reading its output looks like a piece of cake. And it is, as long as you only care about the very common case. But we didn't want to ship a solution that did only half the job.

When you need to send input and read one or more output streams at the same time, there are two options: use threads, or use an event loop. Looking at Python's implementation, for example, we find that it uses either an event loop (on Unix-like platforms) or threads (on Windows). And its implementation is no piece of cake.

Back in 2011 we realized that Node already contains an excellent event-loop library, namely libuv. In theory, all the pieces needed to implement this feature were in place. Yet there were always problems, big and small, that kept it from working truly reliably.

For example, when a child process exits, the kernel notifies node with a SIGCHLD signal, but for a long time libuv could not handle signals correctly when multiple event loops existed. Likewise, the ability to tear down an event loop without leaving traces behind was added only recently; before that, Node simply didn't care: at some point it just exited and let the OS clean up the battlefield. That strategy doesn't work if you need a temporary event loop and want to keep running after you're done with it.

Slowly, over time, all of these problems were fixed. So if you now manage to look past the buffer management, argument parsing, timeout handling and the like, you'll find that the core of this feature is just an event loop with a child process, a timer and a bunch of pipes attached to it.

If you don't care how it all works, just take a look at the documentation and be amazed by the wealth of options node gives you for controlling child processes. Now, who wants to go and fix ShellJS? :)

About the author

This article was originally published by Bert Belder on StrongLoop. Bert Belder has been working on Node.js since 2010, and he is one of the main authors of libuv, the library that Node.js is built on. Besides being a technical lead for StrongLoop and Node core, he works on features that will keep Node at the forefront of innovation, even after version 1.0 is released. StrongLoop makes it easier to develop APIs in Node, and adds DevOps capabilities such as monitoring, clustering and support for private registries.

Original English article: What's New in Node.js v0.12 – execSync: a Synchronous API for Child Processes, March 12, 2014

Some useful ffmpeg commands (a command-line video conversion tool)

1. Converting MOV/MPEG/AVI/MP4 –> FLV
#ffmpeg -i input_file output.flv

2. Convert and adjust the video resolution of the output file
# ffmpeg -i input.avi -s 500x500 output.flv

3. Converting 3GP –> FLV
#ffmpeg -i input.3gp -sameq -an output.flv

4. Converting MPEG –>3GP
#ffmpeg -i input.mpeg -ab 8.85k -acodec libamr_wb -ac 1 -ar 16000 -vcodec h263 -s qcif output.3gp

5. Converting WMV –> MP3
#ffmpeg -i input.wmv output.mp3

6. Converting AMR –> MP3
#ffmpeg -i input.amr -ar 22050 output.mp3

7. Converting FLV –> MP4
# ffmpeg -i input.flv -ar 22050 output.mp4

Advanced Flash-compatible conversion:

#ffmpeg -i input.flv -s 480x360 -vcodec libx264 -b 1900kb -flags +loop -cmp +chroma -me_range 16 -me_method hex -subq 5 -i_qfactor 0.71 -qcomp 0.6 -qdiff 4 -directpred 1 -flags2 +fastpskip -dts_delta_threshold 1 -acodec libfaac -ab 128000 output.mp4

This command produces output that will play on the iPhone:

#ffmpeg -i 400b75e5081fca44d094.flv -s 320x240 -r 24 -b 200k -bt 240k -vcodec libx264 -vpre hq -coder 0 -bf 0 -flags2 -wpred-dct8x8 -level 13 -maxrate 768k -bufsize 3M -acodec libfaac -ac 2 -ar 48000 -ab 192k new_output_new1.mp4

8. Converting AAC –> Mp3
#ffmpeg -i input.aac -ar 22050 -ab 32 output.mp3

Getting the best performance out of ffmpeg

The following arguments help you optimize the conversion and get the best result when converting MP4 to FLV:

# ffmpeg -i input.mp4 -ar 22050 -ab 56 -acodec mp3 -r 25 -f flv -b 400 -s 320x240 output.flv

-i = specify input file
-ar = audio sample rate
-ab = audio bitrate
-acodec = audio codec
-r = framerate
-f = output format
-b = bitrate (of video!)
-s = resize to width x height

9. Convert Video –> JPG Sequence
#ffmpeg -i input.mpg -an -r 10 -y -s 320x240 video%d.jpg

10. Convert every n seconds to JPEG
#ffmpeg -i input.mpg -r 0.2 -sameq -f image2 thumbs%02d.jpg
Here -r is the rate at which JPG files are created: 0.2 means one frame every 5 seconds; if you need a JPEG every 45 seconds, use -r 1/45.

11. Capture frames from the video between the specified times
#ffmpeg -i input.mpg -an -ss 00:00:03 -t 00:00:01 -r 1 -y -s 320x240 video%d.jpg
where
-ss : recording start time
-t : duration to capture; for example, to grab 2 minutes of video starting at the 10-minute mark, use
-ss 00:10:00 -t 00:02:00
%d : sequence number used to name the output files

12. Converting flv –> mp3
#ffmpeg -i input.flv -vn -acodec copy output.mp3

13. Converting WAV –> MP3
#ffmpeg -i Input.wav -ab 128 Output.mp3
The same approach also extracts MP3 audio from a video file:
#ffmpeg -i input.avi -vn -ar 44100 -ac 2 -ab 192 -f mp3 output.mp3

14. Encode a video sequence for the iPod/iPhone
ffmpeg -i input.avi -acodec aac -ab 128kb -vcodec mpeg4 -b 1200kb -mbd 2 -flags +4mv+trell -aic 2 -cmp 2 -subcmp 2 -s 320x180 -title X output.mp4
Source : input.avi
Audio codec : aac
Audio bitrate : 128kb/s
Video codec : mpeg4
Video bitrate : 1200kb/s
Video size : 320px by 180px
Generated video : output.mp4

15. Converting AVI –> GIF
ffmpeg -i input.avi gif_anime.gif

16. Compress AVI –> DivX
#ffmpeg -i input.avi -s 320x240 -vcodec msmpeg4v2 output.avi

17. Converting FLV –> MP4
#ffmpeg -i sample.flv -ab 128kb -vcodec mpeg4 -b 1200kb -mbd 2 -s 320x240 final_video.mp4

Some useful resources:

http://www.itbroadcastanddigitalcinema.com/ffmpeg_howto.html

http://ffmpeg.org/faq.html

Apache Kafka 0.8.0 released: a high-throughput distributed messaging system

Apache Kafka 0.8.0 was released recently. Apache Kafka is a distributed log service that originated at LinkedIn, written mostly in Scala (with a small amount of Java); in essence it is a high-throughput message queue with a deliberately simple feature set. Thanks to its distinctive architecture, Kafka ships with built-in partitioning, replication and fault tolerance, which traditional message queues lack, making it well suited to large-scale systems. Published figures show Kafka handling more than 400,000 messages per second.

Kafka is already in wide use at internet companies such as Twitter, Pinterest, Netflix, Tumblr, Foursquare, Square, StumbleUpon and Coursera. Typical use cases include message processing, activity stream tracking, operational data monitoring, log aggregation and stream processing (in combination with Storm).

The main improvements in the 0.8.0 release, along with further technical details, are described in the release notes.

Using Proxy Cache to make Nginx cache static resources

Introduction

Nginx is a high-performance HTTP server, and with Proxy Cache it can cache static resources. It works by storing static resources on the local disk according to the configured rules, while keeping frequently used resources cached in memory, which speeds up responses for static content.

Configuring Proxy Cache

Here is an nginx configuration snippet:

proxy_temp_path   /usr/local/nginx/proxy_temp_dir 1 2;

#keys_zone=cache1:100m means the zone is named cache1 and is allocated 100MB of memory
#/usr/local/nginx/proxy_cache_dir/cache1 is the directory where files for the cache1 zone are stored
#levels=1:2 means the first-level cache directory name is 1 character long and the second level is 2 characters, e.g. /usr/local/nginx/proxy_cache_dir/cache1/a/1b
#inactive=1d means a cached file in this zone is removed by the cache manager process if it is not accessed for one day
#max_size=10g means this zone may occupy at most 10GB of disk space

proxy_cache_path  /usr/local/nginx/proxy_cache_dir/cache1  levels=1:2 keys_zone=cache1:100m inactive=1d max_size=10g;

server {
    listen 80;
    server_name *.example.com;

    #add $upstream_cache_status to the log format
    log_format format1 '$remote_addr - $remote_user [$time_local]  '
        '"$request" $status $body_bytes_sent '
        '"$http_referer" "$http_user_agent" $upstream_cache_status';

    access_log log/access.log format1;

    #$upstream_cache_status is the cache status of the resource: HIT, MISS or EXPIRED
    add_header X-Cache $upstream_cache_status;
    location ~ \.(jpg|png|gif|css|js)$ {
        proxy_pass http://127.0.0.1:81;

        #zone in which to cache resources
        proxy_cache cache1;

        #key used for cache lookups
        proxy_cache_key $host$uri$is_args$args;

        #cache responses with status code 200 or 304, for 10 minutes
        proxy_cache_valid 200 304 10m;

        expires 30d;
    }
}
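
You can verify the cache is working by requesting the same static resource twice and watching the X-Cache header added above (the host and path here are just examples). The first request should report MISS, the repeat request HIT:

$ curl -s -D - -o /dev/null http://www.example.com/logo.png | grep X-Cache
X-Cache: MISS
$ curl -s -D - -o /dev/null http://www.example.com/logo.png | grep X-Cache
X-Cache: HIT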

Installing the Purge module

The Purge module is used to remove entries from the cache.

$ wget http://labs.frickle.com/files/ngx_cache_purge-1.2.tar.gz
$ tar -zxvf ngx_cache_purge-1.2.tar.gz

Check the current build parameters:

$ /usr/local/nginx/sbin/nginx -V 

Append --add-module=/usr/local/ngx_cache_purge-1.2 to the existing configure arguments:

$ ./configure --user=www --group=www --prefix=/usr/local/nginx \
--with-http_stub_status_module --with-http_ssl_module \
--with-http_realip_module --add-module=/usr/local/ngx_cache_purge-1.2
$ make && make install

Shut nginx down, then start it again:

$ /usr/local/nginx/sbin/nginx -s quit
$ /usr/local/nginx/sbin/nginx

Configuring Purge

Here is the Purge configuration snippet for nginx:

location ~ /purge(/.*) {
    #IPs allowed to purge
    allow 127.0.0.1;
    deny all;
    proxy_cache_purge cache1 $host$1$is_args$args;
}

Clearing the cache

Usage:

$ wget http://example.com/purge/uri

Here uri is the URI of the static resource. If a resource is cached under the URL http://example.com/js/jquery.js, then requesting http://example.com/purge/js/jquery.js removes it from the cache.

Hit rate

Save the following code as hit_rate.sh:

#!/bin/bash
# author: Jeremy Wei <[email protected]>
# proxy_cache hit rate

if [ "$1x" != "x" ]; then
    if [ -e "$1" ]; then
        HIT=`cat "$1" | grep HIT | wc -l`
        ALL=`cat "$1" | wc -l`
        Hit_rate=`echo "scale=2;($HIT/$ALL)*100" | bc`
        echo "Hit rate=$Hit_rate%"
    else
        echo "$1 does not exist!"
    fi
else
    echo "usage: ./hit_rate.sh file_path"
fi

Usage:

$ ./hit_rate.sh /usr/local/nginx/log/access.log
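
Run against an access log written with the format1 format configured above, the script prints something like this (the number is illustrative):

Hit rate=85.00%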

References:

http://wiki.nginx.org/HttpProxyModule

(End)

 

Author: JeremyWei | Reproduction is permitted, provided the original source, author information and this copyright notice are indicated via a hyperlink
URL: http://weizhifeng.net/nginx-proxy-cache.html

6 Must Have Node.js Modules

So you’re thinking about using node.js: awesome. If you’re new to the community you’re probably thinking “what’s the best node.js module / library for X?” I think it’s really true when experienced language gurus say “80% of your favorite language is your favorite library.” This is the first in a series of articles that will give you a high-level overview of some of our favorite node.js libraries at Nodejitsu. Today we’ll take a look at these libraries:

 

  1. cradle: A high-level, caching, CouchDB library for Node.js
  2. findit: Walk a directory tree in node.js
  3. node_redis: Redis client for node
  4. node-static: RFC2616 compliant HTTP static-file server module, with built-in caching.
  5. optimist: Light-weight option parsing for node.js
  6. xml2js: Simple XML to JavaScript object converter.

 

cradle: A high-level, caching, CouchDB library for Node.js

If you’re using CouchDB you should be using cradle. Cradle stands above the other CouchDB libraries in the node.js community: it has a robust LRU (least recently used) cache, bulk document processing, and a simple and elegant API:

 

var cradle = require('cradle');

//
// Create a connection
//
var conn = new(cradle.Connection)('http://living-room.couch', 5984, {
  cache: true,
  raw: false
});

//
// Get a database
//
var database = conn.database('newyorkcity');

//
// Now work with it
//
database.save('flatiron', {
  description: 'The neighborhood surrounding the Flatiron building',
  boundaries: {
    north: '28 Street',
    south: '18 Street',
    east: 'Park Avenue',
    west: '6 Avenue'
  }
}, function (err, res) {
  console.log(res.ok) // True
});
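
Reading a document back out is just as terse; a quick sketch reusing the database from above:

database.get('flatiron', function (err, doc) {
  if (err) throw err;
  // The saved fields come back as plain object properties
  console.log(doc.description);
});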

 

 

findit: Walk a directory tree in Node.js

A common set of problems that I see on the nodejs mailing list involves advanced file system operations: watching all the files in a directory, enumerating an entire directory tree, etc. Recently, while working on my fork of docco to make the generated documentation respect directory structure, I needed such a feature. It was surprisingly easy:

 

var findit = require('findit');

findit.find('/dir/to/walk', function (file) {
  //
  // This function is called each time a file is enumerated in the dir tree
  //
  console.log(file);
});
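
find() also returns an EventEmitter, so if you need to distinguish files from directories, or to know when the walk has finished, you can listen for the individual events instead. A sketch based on findit's emitter interface (the path is just an example):

var findit = require('findit');

var finder = findit.find('/dir/to/walk');

finder.on('directory', function (dir) {
  // Called once per directory encountered
  console.log('entering ' + dir + '/');
});

finder.on('file', function (file) {
  // Called once per file encountered
  console.log(file);
});

finder.on('end', function () {
  console.log('walk complete');
});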

 

 

node_redis: Redis client for Node.js

There have been a lot of redis clients released for node.js. The question has become: which client is the right one to use? When answering that question for any library, you want to look for a few things: the author, the recent activity, and the number of followers on GitHub. In this case the author is Matt Ranney, a member of the node.js core team. The most recent commit was yesterday, and the repository has over 300 followers.

Redis is really fast, and extremely useful for storing volatile information like sessions and cached data. Let’s take a look at some sample usage:

 

var redis = require("redis"),
    client = redis.createClient();

client.on("error", function (err) {
  console.log("Error " + err);
});

client.set("string key", "string val", redis.print);
client.hset("hash key", "hashtest 1", "some value", redis.print);
client.hset(["hash key", "hashtest 2", "some other value"], redis.print);
client.hkeys("hash key", function (err, replies) {
  console.log(replies.length + " replies:");
  replies.forEach(function (reply, i) {
      console.log("    " + i + ": " + reply);
  });
  client.quit();
});
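
Since redis is such a good fit for volatile data, it is worth knowing that keys can be stored with an expiry time. A small sketch (the key name, lifetime and payload are arbitrary):

var redis = require("redis"),
    client = redis.createClient();

// Store a session for one hour; redis removes the key automatically when it expires
client.setex("session:12345", 3600, "some session data", redis.print);
client.ttl("session:12345", redis.print); // seconds left before expiry
client.quit();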

 

 

node-static: RFC2616 compliant HTTP static-file server module, with built-in caching

I bet you’re wondering “What the $%^@ is RFC2616?” RFC2616 is the standards specification for HTTP 1.1, released in 1999. This spec is responsible for outlining how (among other things) files should be served over HTTP. Thus, when choosing a node.js static file server, it’s important to understand which libraries are standards compliant and which are not: node-static is. In addition, it has some great built-in caching which will speed up your file serving in highly concurrent scenarios.

Using node-static is easy; let’s make a static file server in a few lines of JavaScript:

 

var static = require('node-static');

//
// Create a node-static server instance to serve the './public' folder
//
var file = new(static.Server)('./public');

require('http').createServer(function (request, response) {
  request.addListener('end', function () {
    //
    // Serve files!
    //
    file.serve(request, response);
  });
}).listen(8080);
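
With the server running, every file under ./public is served at the matching URL; for example (assuming ./public/index.html exists):

$ curl http://localhost:8080/index.html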

 

 

optimist: Light-weight option parsing for Node.js

One of the great things about node.js is how easy it is to write (and later publish with npm) simple command-line tools in Javascript. Clearly, when one is writing a command line tool one of the most important things is to have a robust command line options parser. Our library of choice for this at Nodejitsu is optimist by substack.

Let’s take a look at a sample CLI script reminiscent of FizzBuzz:

 

#!/usr/bin/env node
var argv = require('optimist').argv;

if (argv.rif - 5 * argv.xup > 7.138) {
  console.log('Buy more riffiwobbles');
}
else {
  console.log('Sell the xupptumblers');
}

 

Using this CLI script is easy:

$ ./node-optimist.js --rif=55 --xup=9.52
Buy more riffiwobbles

$ ./node-optimist.js --rif 12 --xup 8.1
Sell the xupptumblers

This library has support for -a style arguments and --argument style arguments. In addition any arguments passed without an option will be available in argv._. For more information on this library check out the repository on GitHub.
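
optimist can also enforce required options and supply defaults through its chaining API. A short sketch (the option names are carried over from the example above):

#!/usr/bin/env node
var argv = require('optimist')
    .usage('Usage: $0 --rif [num] --xup [num]')
    .demand(['rif'])       // exit with the usage message if --rif is missing
    .default('xup', 8.0)   // fall back to 8.0 when --xup is not given
    .argv;

console.log('rif:', argv.rif, 'xup:', argv.xup);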

 

xml2js: Simple XML to Javascript object converter

Writing clients in node.js for APIs that expose data through JSON is almost too easy. There is no need for a complex, language-specific JSON parsing library that one might find in languages such as Ruby or Python. Just use the built-in Javascript JSON.parse method on the data returned and voila! you’ve got native Javascript objects.

But what about APIs that only expose their data through XML? You could use the native libxmljs module from polotek, but the overhead of dealing with individual XML nodes is non-trivial and (in my opinion) can lead to excess complexity. There is another, simpler option: the lesser known xml2js library available on npm and GitHub.

Let’s suppose that we had some XML (/me dies a little inside):

 

<?xml version="1.0" encoding="UTF-8"?>
<root>
  <child foo="bar">
    <grandchild baz="fizbuzz">grandchild content</grandchild>
  </child>
  <sibling>with content!</sibling>
</root>

 

Parsing this using xml2js is actually surprisingly easy:

 

var fs = require('fs'),
    eyes = require('eyes'),
    xml2js = require('xml2js');

var parser = new xml2js.Parser();

parser.on('end', function(result) {
  eyes.inspect(result);
});

fs.readFile(__dirname + '/foo.xml', function(err, data) {
  parser.parseString(data);
});

 

The output we would see is:

 

{
  child: {
    @: { foo: 'bar' },
    grandchild: {
      #: 'grandchild content',
      @: { baz: 'fizbuzz' }
    }
  },
  sibling: 'with content!'
}

 

If you haven’t already noticed, xml2js transforms arbitrary XML to JSON in the following way:

  • All entity tags like <child> become keys in the corresponding JSON.
  • Simple tags like <sibling>with content</sibling> become simple key:value pairs (e.g. sibling: ‘with content!’)
  • More complex tags like <child>... and <grandchild>... become complex key:value pairs where the value is an Object literal with two important properties:
    1. @: An Object representing all attributes on the specified tag
    2. #: Any text content for this XML node.

This simple mapping can greatly simplify the XML parsing logic in your node.js application and is worth checking out if you ever have to deal with the three-headed dog we all love to hate.

 

Just getting started

This is the first in a series of articles where we will outline at a high level the best-of-the-best modules, libraries and techniques in node.js that you should be aware of. If you’re interested in writing your own node.js modules and publishing them to npm, check out isaacs’ new article: How to Module over at howtonode.org.