There are two ways to install and configure pptpd:

  • Configure everything by hand;

  • Use the automated script (recommended: quick to set up, no need to configure each piece yourself)


Automated script

git clone https://github.com/mouse-lin/pptpd.git

Manual configuration

  • Install pptpd

sudo apt-get -y install pptpd
  • Edit the PPP options file for pptpd
cat >/etc/ppp/options.pptpd <<END
name pptpd
refuse-pap
refuse-chap
refuse-mschap
require-mschap-v2
require-mppe-128
ms-dns 8.8.8.8
ms-dns 8.8.4.4
proxyarp
lock
nobsdcomp 
novj
novjccomp
nologfd
END
  • Enable IP forwarding
cat >> /etc/sysctl.conf <<END
net.ipv4.ip_forward=1
END
sysctl -p
  • Configure iptables
iptables-save > /etc/iptables.down.rules

iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

iptables -I FORWARD -s 192.168.2.0/24 -p tcp --syn -i ppp+ -j TCPMSS --set-mss 1300

iptables-save > /etc/iptables.up.rules

cat >>/etc/ppp/pptpd-options<<EOF
pre-up iptables-restore < /etc/iptables.up.rules
post-down iptables-restore < /etc/iptables.down.rules
EOF
  • Create a VPN login account (the chap-secrets fields are: client name, server name, password, allowed IP addresses)
cat >/etc/ppp/chap-secrets <<END
test pptpd test *
END

  • NULLS LAST

During development we often need to sort a table by certain columns, and some rows inevitably hold NULL in those columns. For example:

@answers = Answer.order("course_id DESC")

Note that with the code above, MySQL returns rows that have a course_id first and rows whose course_id is NULL last, while PostgreSQL does the opposite. In that case you need to write:

@answers = Answer.order("course_id DESC NULLS LAST")
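
If the same code has to run on both MySQL and PostgreSQL, one option is to branch on the database adapter. A minimal sketch, assuming the Answer model from above; the ordered_by_course scope name is made up for illustration:

class Answer < ActiveRecord::Base
  # NULLS LAST is PostgreSQL syntax; MySQL already sorts NULLs last with DESC.
  scope :ordered_by_course, -> {
    if connection.adapter_name =~ /postgres/i
      order("course_id DESC NULLS LAST")
    else
      order("course_id DESC")
    end
  }
end

@answers = Answer.ordered_by_course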
  • The ORDER BY column must appear in the select list

Still on sorting: we often join another table and order by one of its columns, for example:

@answers = Answer.joins(:broadcast_records).order("broadcast_records.id desc")

The code above works on both MySQL and PostgreSQL, but add uniq or select("DISTINCT ...") to it, for example:

@answers = Answer.joins(:broadcast_records).order("broadcast_records.id desc").uniq
@answers = Answer.joins(:broadcast_records).order("broadcast_records.id desc").select("DISTINCT answers.*")

and PostgreSQL throws the following error (congratulations, you may have just produced another production bug):

PG::InvalidColumnReference: ERROR:  for SELECT DISTINCT, ORDER BY expressions must appear in select list

As the error says, the ORDER BY expressions must appear in the select list, so we can change the code like this:

@answers = Answer.joins(:broadcast_records).order("broadcast_records.id desc").uniq.select("answers.*, broadcast_records.id")
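
To see why the fix works, to_sql shows the query ActiveRecord generates: with the extra select, the ordering column now sits inside the DISTINCT select list. The output below is only a rough sketch; exact quoting depends on the adapter:

relation = Answer.joins(:broadcast_records)
                 .order("broadcast_records.id desc")
                 .uniq
                 .select("answers.*, broadcast_records.id")

puts relation.to_sql
# => SELECT DISTINCT answers.*, broadcast_records.id FROM answers
#    INNER JOIN broadcast_records ON ... ORDER BY broadcast_records.id desc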

WeChat is more and more popular these days, and many HTML5 pages are opened directly inside WeChat's built-in WebView. Whenever such a page is shared, you need to control the title, image, link, and description that go with it.

Sharing in WeChat mainly takes two forms:

  • To Moments (the friends' timeline)
  • To a specific friend

First, define the parameters to be shared on the page with JavaScript:

var imgUrl = 'image URL';
var lineLink = 'link to open after the share is tapped';
var descContent = "description";
var shareTitle = 'title';
var appid = 'wxc9937e3a66af6dc8'; // appid is optional; if provided, the corresponding verified app icon is shown with the shared message

Then define three different share functions (friend, Moments, and Weibo):

function shareFriend() {
    WeixinJSBridge.invoke('sendAppMessage',{
                            "appid": appid,
                            "img_url": imgUrl,
                            "img_width": "640",
                            "img_height": "640",
                            "link": lineLink,
                            "desc": descContent,
                            "title": shareTitle
                            }, function(res) {
                            _report('send_msg', res.err_msg);
                            });
}
function shareTimeline() {
    WeixinJSBridge.invoke('shareTimeline',{
                            "img_url": imgUrl,
                            "img_width": "640",
                            "img_height": "640",
                            "link": lineLink,
                            "desc": descContent,
                            "title": shareTitle
                            }, function(res) {
                            _report('timeline', res.err_msg);
                            });
}
function shareWeibo() {
    WeixinJSBridge.invoke('shareWeibo',{
                            "content": descContent,
                            "url": lineLink,
                            }, function(res) {
                            _report('weibo', res.err_msg);
                            });
}

Note that on the Android client every parameter must be passed even when it is empty; otherwise the client cannot complete the share.

WeChat's built-in browser fires the WeixinJSBridgeReady event once its internal initialization is complete:

document.addEventListener('WeixinJSBridgeReady', function onBridgeReady() {

        // Send to a friend
        WeixinJSBridge.on('menu:share:appmessage', function(argv){
            shareFriend();
            });

        // Share to Moments
        WeixinJSBridge.on('menu:share:timeline', function(argv){
            shareTimeline();
            });

        // Share to Weibo
        WeixinJSBridge.on('menu:share:weibo', function(argv){
            shareWeibo();
            });
}, false);

Here be a sample post with a custom background image. To utilize this “feature” just add the following YAML to a post’s front matter.

image:
  background: filename.png

This little bit of YAML makes the assumption that your background image asset is in the /images folder. If you place it somewhere else or are hotlinking from the web, just include the full http(s):// URL. Either way you should have a background image that is tiled.

If you want to set a background image for the entire site just add background: filename.png to your _config.yml and BOOM — background images on every page!

Background images from Subtle Patterns / CC BY-SA 3.0


ActiveSupport::Cache::Store (Rails cache)

  • fetch

According to the Rails API: Fetches data from the cache, using the given key. If there is data in the cache with the given key, then that data is returned.

If there is no such data in the cache (a cache miss), then nil will be returned. However, if a block has been passed, that block will be passed the key and executed in the event of a cache miss. The return value of the block will be written to the cache under the given cache key, and that return value will be returned.

cache.write('today', 'Monday')
cache.fetch('today')  # => "Monday"

cache.fetch('city')   # => nil
cache.fetch('city') do
  'Duckburgh'
end

cache.fetch('city')   # => "Duckburgh"
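
In application code the same method is usually reached through Rails.cache, which returns whatever cache store the app is configured with. A small illustrative sketch; the cache key and the query inside the block are made up for this example:

# On a miss the block runs and its result is stored under "answers/recent";
# later calls return the cached array without touching the database.
@answers = Rails.cache.fetch("answers/recent") do
  Answer.order("course_id DESC NULLS LAST").limit(20).to_a
end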

You may also specify additional options via the options argument. Setting force: true will force a cache miss:

cache.write('today', 'Monday')
cache.fetch('today', force: true)  # => nil

Setting :compress will store a large cache entry set by the call in a compressed format.
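
A rough sketch of the option syntax; whether and when compression actually happens depends on the concrete store, and :compress_threshold (where supported) sets the minimum entry size in bytes before compression kicks in:

require 'active_support'
require 'active_support/cache'

# Ask the store to compress entries larger than roughly 2 KB by default.
cache = ActiveSupport::Cache::MemoryStore.new(compress: true, compress_threshold: 2048)

# Or request compression for a single large entry.
cache.write('big_report', 'x' * 50_000, compress: true)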

Setting :expires_in will set an expiration time on the cache. All caches support auto-expiring content after a specified number of seconds. This value can be specified as an option to the constructor (in which case all entries will be affected), or it can be supplied to the fetch or write method to effect just one entry.

cache = ActiveSupport::Cache::MemoryStore.new(expires_in: 5.minutes)
cache.write(key, value, expires_in: 1.minute) # Set a lower value for one entry

Setting :race_condition_ttl is very useful in situations where a cache entry is used very frequently and is under heavy load. If a cache expires and due to heavy load seven different processes will try to read data natively and then they all will try to write to cache. To avoid that case the first process to find an expired cache entry will bump the cache expiration time by the value set in :race_condition_ttl. Yes, this process is extending the time for a stale value by another few seconds. Because of the extended life of the previous cache, other processes will continue to use slightly stale data for just a bit longer. In the meantime that first process will go ahead and will write into cache the new value. After that all the processes will start getting the new value. The key is to keep :race_condition_ttl small.

If the process regenerating the entry errors out, the entry will be regenerated after the specified number of seconds. Also note that the life of stale cache is extended only if it expired recently. Otherwise a new value is generated and :race_condition_ttl does not play any role.

# Set all values to expire after one minute.
cache = ActiveSupport::Cache::MemoryStore.new(expires_in: 1.minute)

cache.write('foo', 'original value')
val_1 = nil
val_2 = nil
sleep 60

Thread.new do
  val_1 = cache.fetch('foo', race_condition_ttl: 10) do
    sleep 1
    'new value 1'
  end
end

Thread.new do
  val_2 = cache.fetch('foo', race_condition_ttl: 10) do
    'new value 2'
  end
end

# val_1 => "new value 1"
# val_2 => "original value"
# sleep 10 # First thread extends the life of the cache by another 10 seconds
# cache.fetch('foo') => "new value 1"

Other options will be handled by the specific cache store implementation. Internally, fetch calls read_entry, and calls write_entry on a cache miss. options will be passed to the read and write calls.

For example, MemCacheStore’s write method supports the :raw option, which tells the memcached server to store all values as strings. We can use this option with fetch too:

cache = ActiveSupport::Cache::MemCacheStore.new
cache.fetch("foo", force: true, raw: true) do
  :bar
end
cache.fetch('foo')  # => "bar"

source code:

def fetch(name, options = nil)
  if block_given?
    options = merged_options(options)
    key = namespaced_key(name, options)

    cached_entry = find_cached_entry(key, name, options) unless options[:force]
    entry = handle_expired_entry(cached_entry, key, options)

    if entry
      get_entry_value(entry, name, options)
    else
      save_block_result_to_cache(name, options) { |_name| yield _name }
    end
  else
    read(name, options)
  end
end

Read it on Rails API