15 Interesting JavaScript and CSS Libraries for August 2017


Our mission at Tutorialzine is to keep you up to date with the latest and coolest trends in web development. That’s why every month we release a handpicked collection of some of the best resources that we’ve stumbled upon and deemed worthy of your attention.



Titanic

A set of beautiful SVG icons with very detailed on-hover animations. Unlike most other web icon fonts, this one is actually JavaScript-powered and requires the bodymovin library for exporting After Effects animations to SVG format.



Rebass

Rebass is a React UI kit for building responsive web apps. It is made up of over 60 styled-components which are customizable via styled-system-based properties. This keeps styles isolated and reduces the need to write custom CSS.



Bootstrap 4 (Beta)

Bootstrap 4 is now officially in Beta! The new version of the framework brings forth a lot of great changes, including a flexbox-based grid system, new and restyled components, faster ES6 JavaScript plugins, improved documentation, and much more.



Hover Buttons

A cool set of HTML buttons with animated on-hover effects. The buttons come in all shapes and sizes and there are a lot of great animations to choose from. The library is made with SCSS so you can easily remove the buttons you don’t need or change the styles to your liking.



React Simple Maps

A React component library for creating maps made out of SVG. There are components for adding all kinds of map details, like text annotations, markers, and custom colors for each region. Since the maps are SVG-based, they can be zoomed in and out with great efficiency.



Gpu.js

A library for running JavaScript code in the browser on the GPU. It lets you execute complex calculations much more quickly by compiling specially written JS into shader language that runs on the GPU via WebGL. If WebGL isn't available, the functions fall back to regular JavaScript.



Pell

Pell is a super lightweight WYSIWYG text editor for the web. It weighs only 1kB, has absolutely no dependencies, and is made up of fewer than 200 lines of ES6 code. It supports all the actions needed for formatting text, including inserting images and links.



Chromeless

A web automation framework based on the Headless Chrome platform. Its API and features are very similar to those of other popular tools like PhantomJS and NightmareJS, with the main difference that it runs all tests in Chrome's headless mode. It works locally or on AWS Lambda.



Fitty

Fitty is a vanilla JavaScript library that changes the font size of text to make it fit into a fixed-width container. It works with all standard web fonts, scaling their size up or down so that they optimally take the available space without line breaks – perfect for titles and other headings.



Notif.me

A Node.js library for sending notifications. It works as an all-in-one solution for handling emails, SMS, and push notifications. Each service has multiple providers you can choose from (e.g. SMTP or Sendmail for email, Nexmo or Twilio for SMS).



Shoelace

Shoelace is a super lightweight CSS starter kit that aims to provide a tinier alternative to frameworks like Bootstrap. It doesn’t have too many styles and features, just a solid CSS reset with some helpful UI components. The library’s code is built with CSS variables, making it easy to customize without the need of a preprocessor.



TensorFire

A framework for running neural networks in the browser. TensorFire is GPU-accelerated via WebGL, which makes it possible to run bigger machine learning models without a problem. The project is still in its early stages, but there are already some very promising demos made with it (e.g. gesture detection and rock-paper-scissors).



Vali

An admin dashboard template built with Bootstrap, PugJS, Sass, and other modern technologies. Because the project is created with easy customization in mind, all the styles are organized into many independent Sass modules. The template offers many components and widgets, which you can check out in the demo.



BotUI

A JavaScript framework for building conversational bot interfaces. It has a super simple API that lets you configure the flow of conversations by adding messages, questions, and even form input fields for the user to fill in.

If you want to learn more about interactive conversational UI, check out our article Developer’s Introduction To Chatbots.



Nano ID

Tiny JavaScript library for generating unique IDs. It uses only URL-friendly symbols for the generated strings but there is an option to provide your own alphabet. On the project’s GitHub page you can find some interesting info about the way the library works and the algorithms it uses.

Why Go is my favorite programming language

I strive to respect everybody’s personal preferences, so I usually steer clear of debates about which is the best programming language, text editor or operating system.

However, I was recently asked a couple of times why I like Go and use it as much as I do, so here is a coherent article to fill in the blanks of my ad-hoc in-person ramblings :-).

My background

I have used C and Perl for a number of decently sized projects. I have written programs in Python, Ruby, C++, CHICKEN Scheme, Emacs Lisp, Rust and Java (for Android only). I understand a bit of Lua, PHP, Erlang and Haskell. In a previous life, I developed a number of programs using Delphi.

I had a brief look at Go in 2009, when it was first released. I seriously started using the language when Go 1.0 was released in 2012, featuring the Go 1 compatibility guarantee. I still have code running in production which I authored in 2012, largely untouched.

1. Clarity

Formatting

Go code, by convention, is formatted using the gofmt tool. Programmatically formatting code is not a new idea, but contrary to its predecessors, gofmt supports precisely one canonical style.

Having all code formatted the same way makes reading code easier; the code feels familiar. This helps not only when reading the standard library or Go compiler, but also when working with many code bases — think Open Source, or big companies.

Further, auto-formatting is a huge time-saver during code reviews, as it eliminates an entire dimension in which code could be reviewed before: now, you can just let your continuous integration system verify that gofmt produces no diffs.

Interestingly enough, having my editor apply gofmt when saving a file has changed the way I write code. I used to attempt to match what the formatter would enforce, then have it correct my mistakes. Nowadays, I express my thought as quickly as possible and trust gofmt to make it pretty (try typing some sloppily formatted code into the Go playground and clicking Format).
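
As a concrete illustration (mine, not from the original post), here is roughly what gofmt settles a carelessly typed snippet into; indentation, spacing, and layout come out the same no matter how it was typed:

package main

import "fmt"

func main() {
	// Typed as: x:=map[string]int{"a":1,"b":22} with arbitrary spacing;
	// gofmt rewrites it to the canonical form below on save.
	x := map[string]int{
		"a": 1,
		"b": 22,
	}
	fmt.Println(x)
}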

High-quality code

I use the standard library (docs, source) quite a bit, see below.

All standard library code which I have read so far was of extremely high quality.

One example is the image/jpeg package: I didn't know how JPEG worked at the time, but it was easy to pick up by switching between the Wikipedia JPEG article and the image/jpeg code. If the package had a few more comments, I would qualify it as a teaching implementation.

Opinions

I have come to agree with many opinions the Go community holds, such as:

Few keywords and abstraction layers

The Go specification lists only 25 keywords, which I can easily keep in my head.

The same is true for builtin functions and types.

In my experience, the small number of abstraction layers and concepts makes the language easy to pick up and quickly feel comfortable in.

While we’re talking about it: I was surprised about how readable the Go specification is. It really seems to target programmers (rather than standards committees?).

2. Speed

Quick feedback / low latency

I love quick feedback: I appreciate websites which load quickly, I prefer fluent User Interfaces which don’t lag, and I will choose a quick tool over a more powerful tool any day. The findings of large web properties confirm that this behavior is shared by many.

The authors of the Go compiler respect my desire for low latency: compilation speed matters to them, and new optimizations are carefully weighed against whether they will slow down compilation.

A friend of mine had not used Go before. After installing the RobustIRC bridge using go get, they concluded that Go must be an interpreted language and I had to correct them: no, the Go compiler just is that fast.

Most Go tools are no exception, e.g. gofmt or goimports are blazingly fast.

Maximum resource usage

For batch applications (as opposed to interactive applications), utilizing the available resources to their fullest is usually more important than low latency.

It is delightfully easy to profile and change a Go program to utilize all available IOPS, network bandwidth or compute. As an example, I wrote about filling a 1 Gbps link, and optimized debiman to utilize all available resources, reducing its runtime by hours.

3. Rich standard library

The Go standard library provides means to effectively use common communications protocols and data storage formats/mechanisms, such as TCP/IP, HTTP, JPEG, SQL, …

Go’s standard library is the best one I have ever seen. I perceive it as well-organized, clean, small, yet comprehensive: I often find it possible to write reasonably sized programs with just the standard library, plus one or two external packages.
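
To give a sense of how far the standard library alone goes, here is a minimal sketch (my own, not from the article) of an HTTP server returning JSON, with no external packages at all:

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		// encoding/json and net/http are both standard library packages.
		json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}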

Domain-specific data types and algorithms are (in general) not included and live outside the standard library, e.g. golang.org/x/net/html. The golang.org/x namespace also serves as a staging area for new code before it enters the standard library: the Go 1 compatibility guarantee precludes any breaking changes, even if they are clearly worthwhile. A prominent example is golang.org/x/crypto/ssh, which had to break existing code to establish a more secure default.

4. Tooling

To download, compile, install and update Go packages, I use the go get tool.

All Go code bases I have worked with use the built-in testing facilities. This results not only in easy and fast testing, but also in coverage reports being readily available.
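
For readers who have not seen them, the built-in facilities look like this; a minimal sketch (the sum function is illustrative, and in a real package it would live in its own sum.go file, which is what go test -cover measures):

package sum

import "testing"

// sum is a trivial function under test, defined here so the file is
// self-contained; run the test with `go test`.
func sum(xs []int) int {
	total := 0
	for _, x := range xs {
		total += x
	}
	return total
}

func TestSum(t *testing.T) {
	if got := sum([]int{1, 2, 3}); got != 6 {
		t.Errorf("sum([1 2 3]) = %d, want 6", got)
	}
}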

Whenever a program uses more resources than expected, I fire up pprof. See this golang.org blog post about pprof for an introduction, or my blog post about optimizing Debian Code Search. After importing the net/http/pprof package, you can profile your server while it’s running, without recompilation or restarting.
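
The integration usually amounts to a single blank import; a minimal sketch of a server that exposes the profiling endpoints:

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // side effect: registers /debug/pprof/ handlers on the default mux
)

func main() {
	// While this runs: go tool pprof http://localhost:6060/debug/pprof/heap
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}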

Cross-compilation is as easy as setting the GOARCH environment variable, e.g. GOARCH=arm64 for targeting the Raspberry Pi 3. Notably, tools just work cross-platform, too! For example, I can profile gokrazy from my amd64 computer: go tool pprof ~/go/bin/linux_arm64/dhcp http://gokrazy:3112/debug/pprof/heap.

godoc displays documentation as plain text or serves it via HTTP. godoc.org is a public instance, but I run a local one to use while offline or for not-yet-published packages.

Note that these are standard tools coming with the language. Coming from C, each of the above would be a significant feat to accomplish. In Go, we take them for granted.

Getting started

Hopefully I was able to convey why I’m happy working with Go.

If you’re interested in getting started with Go, check out the beginner’s resources we point people to when they join the Gophers slack channel. See https://golang.org/help/.

Caveats

Of course, no programming tool is entirely free of problems. Given that this article explains why Go is my favorite programming language, it focuses on the positives. I will mention a few issues in passing, though:

  • If you use Go packages which don’t offer a stable API, you might want to use a specific, known-working version. Your best bet is the dep tool, which is not part of the language at the time of writing.
  • Idiomatic Go code does not necessarily translate to the highest performance machine code, and the runtime comes at a (small) cost. In the rare cases where I found performance lacking, I successfully resorted to cgo or assembler. If your domain is hard-realtime applications or otherwise extremely performance-critical code, your mileage may vary, though.
  • I wrote that the Go standard library is the best I have ever seen, but that doesn’t mean it doesn’t have any problems. One example is complicated handling of comments when modifying Go code programmatically via one of the standard library’s oldest packages, go/ast.

The 30 Coolest Android Libraries from Spring 2017

Here are my 30 favorite Android libraries that appeared before March 2017. Some of them are not used in production apps yet, but you may have a lot of fun playing with them. I hope you enjoy these libraries.

The order below does not represent a ranking:

1. Matisse

A beautiful local image and video picker. Its main features:

  • Selecting images in JPEG, PNG, and GIF formats and videos in MPEG and MP4 formats
  • Support for custom themes, including two built-in ones
  • Support for different image loaders
  • Defining custom filter rules
  • Working well in Activities and Fragments

You can find out more in the repository's wiki.

2. Spruce Android Animation Library

Spruce is a lightweight animation library that helps choreograph the animations on the screen. With so many different animation libraries out there, developers need to make sure that each view animates at the appropriate time. Spruce helps designers get complex multi-view animations off the ground without intimidating developers at the prototyping stage.

3. MaterialChipsInput

Chips are a component of Material Design, described in the guidelines as:

Quote:
Small, but fairly complex entities, such as contacts. A chip can contain several independent things, such as a photo, text, a rule, an icon, or a contact.

MaterialChipsInput is an implementation of that component for Android. The library provides two views: ChipsInput and ChipView.

4. Grav

This library lets you create multiple animations based on points. You can easily produce smooth, beautiful animations with it. The README contains plenty of examples you can look through.

5. Litho

Litho is not a library but a framework, and a very powerful one, for building UIs in a declarative way. It is developed by engineers at Facebook, so even if you don't plan to use it, its development process is worth following.

Key features include:

  • A declarative API for defining UI components: you simply describe the layout based on a fixed set of inputs and the framework takes care of the rest.
  • Asynchronous layout: Litho can measure and lay out the UI without blocking the UI thread.
  • Flatter view hierarchies: Litho uses Yoga for layout and automatically reduces the number of ViewGroups in the UI.
  • Fine-grained recycling: any component in the UI, such as a text or an image, can be recycled and reused.

6. Adaptable Bottom Navigation

A while ago Google updated the Material Design guidelines to introduce the bottom navigation bar, a nice way of keeping the UI in sync with the app's content. An implementation was also added to the Design Support Library.

Replacing the support library's BottomNavigationView with Adaptable Bottom Navigation is very simple. It is built to work the way ViewPager and TabLayout do. Here is a short note from the development team:

Quote:
As mentioned, using the bottom navigation view from the Android support library requires a lot of boilerplate code for switching views. So, modeled on TabLayout's setupWithViewPager() method, we created a distinctive ViewSwapper component that hooks into the bottom navigation view and manages which view is displayed in a simple way.

You can find more information on GitHub, where there is thorough documentation and an explanation of why it was built this way (hint: a cleaner architecture).

7. PatternLockView

Quote:
This library lets you implement a pattern-lock mechanism in your app quickly and easily. The view is really simple to use, and it has plenty of customization options for changing its functionality and appearance to suit your needs.

Quote:
It also supports RxJava 2 view bindings, so if you like reactive programming (as I do), you can get a stream of updates as the user draws the pattern.

The README is full of examples, which makes getting started easy.

8. Isometric

A library that helps with drawing isometric shapes. In my opinion it's one of the coolest libraries on this list, because it reminds me of the game Monument Valley.
The library supports drawing multiple shapes, paths, and complex structures, as in the examples shown in its README.

9. UltraViewPager

UltraViewPager is a ViewPager wrapper that bundles several features, mainly to provide a unified solution for multi-page switching scenarios.

Main features:

  • Horizontal and vertical swiping
  • Displaying multiple pages on one screen
  • Circular scrolling
  • Timed auto-scrolling, with the timer implemented via a Handler
  • Setting a maximum width and height for the ViewPager
  • Displaying the UltraViewPager at a given aspect ratio via setRatio
  • A built-in indicator that needs only a few attributes to set up, supporting both dots and icons
  • Two built-in page-transition animations

The library is very well documented.

10. InfiniteCards

A card-switching view with customizable animation effects. The library helps you implement a card UI and switch between the cards with a nice animation.

Parameters:

  • animType: the switching animation, one of:
      • front: bring the tapped card to the front
      • switchPosition: swap the tapped card with the first card
      • frontToLast: move the first card to the back, shifting the cards behind it forward by one
  • cardRatio: the card aspect ratio
  • animDuration: duration of the card animation
  • animAddRemoveDelay: when switching card sets, the delay between adjacent cards' add/remove animations
  • animAddRemoveDuration: when switching card sets, the duration of each card's add/remove animation

11. SlidingRootNav

You can think of this library as a DrawerLayout-like ViewGroup where the drawer is hidden beneath the content view, which can be shifted aside to reveal the drawer. The README is comprehensive and worth a look.

12. PasscodeView

It's just a view for typing in a passcode. But a very nice one!

13. MusicWave

This library lets you represent a sound as a colorful gradient.

14. ShadowImageView

This library helps you add a more meaningful shadow to images. According to the README, it gives you:

Quote:
A shadow whose color follows the image content, for a more delicate shadow effect.

It's also very easy to use.

15. PolygonDrawingUtil

An efficient Android utility class for drawing regular polygons on a Canvas. You can specify:

  • Number of sides (≥ 3)
  • Center coordinates
  • Circumradius (distance from the center to each vertex)
  • Corner radius
  • Polygon rotation
  • Fill and stroke colors

16. Tiny

The second framework on this list. It handles image compression and is quite powerful. According to the docs, it:

Quote:
Uses an asynchronous thread pool to compress images and posts the result back to the main thread when compression completes.

17. ParticleTextView

This library provides a custom view component that renders a given text out of colorful particles, combined with a variety of animation effects and configurable attributes for rich visual results.

18. CropIwa

A highly configurable image-cropping widget. The library is built on a modular architecture, which makes it extremely configurable. You can learn how to configure it from the wiki on GitHub.

19. Project Condom

Condom is an ultra-light, ultra-thin Android utility library. Wrap it around the bare Context in your Android project before passing it to a third-party SDK (usually in the SDK's initialization method), and it prevents a behavior common in third-party SDKs that damages the user experience:

Quote:
Launching lots of other apps' processes in the background (fairly common in third-party push SDKs), which makes app startup very slow and causes severe jank for a while after launch (especially noticeable on low- and mid-range devices). This happens because the other apps started during such an SDK's initialization often embed third-party SDKs with similar behavior, creating a "chain reaction" of process launches that consumes large amounts of CPU, file I/O, and memory in a short time, crowding out (or even exhausting) the resources available to the current app.

20. AppMethodOrder

An Android library that lets you see the call order and execution time of every method, with no intrusive code changes.

Quote:
When a codebase is large, or when you're a newcomer who needs to get up to speed quickly, sprinkling log statements into methods to understand the execution flow is clearly too costly: you have to modify, rebuild, and rerun the project. Too slow! Stepping through a few chosen methods with the debugger to understand the flow? No, you're bound to miss calls. You could stubbornly fill your own project with logs, but what about the call order of methods in the JDK, in third-party AARs or JARs, or in the Android SDK itself? Can you add logs there? Clearly not. This project lets you filter by package name and see the call order of all the methods you care about.

The project is well documented; you'll find a detailed manual on how to use it.

21. Android DebugKit

This is a fun library. It lets you create and use special hovering debug tools that fire actions you define in your application. These actions can be triggered explicitly at runtime, which is handy while developing or testing on-screen feedback.

The library uses the Builder pattern. It is easy to use, and the README includes a usage example.

22. Aesthetic

A new library, still in beta, that does one very cool thing: it supports changing the theme dynamically, backed by Rx! The author describes it as:

Quote:
An easy to use, fast, Rx-powered, plug-and-play dynamic theme engine for Android applications.

The library's documentation is very good and comprehensive; it's worth a read.

23. EasyCalendar

A simple custom calendar widget. Main features:

  • A customizable title layout
  • A customizable date layout
  • Showing or hiding date dividers
  • Showing or hiding overflow dates
  • Listening for clicks on date views

The library's documentation is comprehensive and easy to follow.

24. SimpleRatingBar

This library provides two rating bars:

  • BaseRatingBar – without any animations
  • ScaleRatingBar – with progressive fill and scale animations

You can see how they behave in the GIFs in the project's README.

25. Magellan

This library bills itself as the simplest navigation library for Android, though you'll have to judge for yourself whether it fits your needs. Key features:

  • Navigation is as simple as calling goTo(screen),
  • The back stack is fully controllable,
  • Transitions are handled automatically.

The wiki has comprehensive documentation.

26. ViewPagerAnimator

Quote:
ViewPagerAnimator is a lightweight yet powerful ViewPager animation library for Android. It is designed to animate arbitrary values as the user navigates between pages within a ViewPager, and it will precisely follow the motion of his or her finger. While the library itself may be useful to some, the main purpose of releasing it is to showcase some beautiful API design details that really come to the fore with the upcoming Java 8 extensions. Sample projects are provided for both Java 7 and Java 8.

It was written by Mark Allison, and you can find more information on his Styling Android blog.

27. BlockCanaryEx

A library that makes it easy to find the method in your code that is blocking the main thread when your app stutters. It is based on BlockCanary.

28. PaletteImageView

A very cool library: a widget that dynamically extracts the dominant colors of an image and uses them as the image's shadow.

The project is sparsely documented, but I think the code speaks for itself.

29. RecyclerRefreshLayout

A pull-to-refresh animation styled after a camera shutter opening. In my opinion it is really worth studying, especially since the README includes a mathematical analysis of how the effect is implemented!

30. SlimAdapter

A way of writing adapters without ViewHolders. Main features:

  • No ViewHolders
  • No reflection
  • A fluent, simple API
  • Support for multi-type adapters
  • Support for Kotlin
  • Simple DiffUtil support

That's all. I hope you enjoyed this article! If there are other great libraries released this spring that I haven't mentioned, let me know in the comments below. Let's maintain this list together!

Original article: The 30 Coolest Android Libraries from Spring 2017
Translators: Tocy, 边城, 王练, 总长, tsingkuo

20 Open-Source Android Apps to Boost Your Development Skills

The best way to learn is to read, and that is just as true for developers. If you want to become a better developer, you have to read more code. It's that simple.

Books, blogs, and forums are all good to a degree, but nothing replaces a fully functional, well-documented open-source project, where every resource that makes up the app is laid out in front of you.

So all you need to do is sit down, grab a coffee, and read some great code. In this article we present some of the best open-source Android apps, across a variety of categories and styles, to satisfy all your learning and development needs.

Before diving into the code, you can try each app straight from the Play Store. The difficulty level attached to each app will help you judge whether to dig into it right away or set it aside for later.

LeafPic

(GitHub | Play Store | Difficulty: beginner)

Photo and video gallery apps are among the most common apps on Android. Ever wondered how they're made? LeafPic is one of the best open-source gallery apps you can try learning from.

It is very simple, easy to understand, and perfectly suited to any beginner developer. One of the best things I found in this app is its implementation of dynamic theming, something many Android developers struggle to get right.

Simple Calendar

(GitHub | Play Store | Difficulty: beginner)

A simple, easy-to-use calendar app developed purely in Kotlin. If you want to learn Kotlin, this is probably one of the best ways to get started.

The app's goal is quite simple: to let you learn a brand-new language hands-on by building an Android app. And another cool thing: you'll also learn how to build a custom Android home-screen widget.

Amaze File Manager

(GitHub | Play Store: https://play.google.com/store/apps/details?id=com.amaze.filemanager | Difficulty: intermediate)

A very common kind of Android file manager that you can use on almost any Android device.

Although building a file manager app seems simple at first, making it run well across all Android platforms and devices is genuinely hard.

You can learn a lot from this app, especially how to handle files on the SD card properly. But I wouldn't recommend following this project's coding standards, as they are not up to the mark.

Easy Sound Recorder

(GitHub | Play Store | Difficulty: beginner)

A simple, easy-to-use, beautiful sound recorder app for Android. If you want to learn about audio recording and manipulation on Android, this project suits you best.

The project is quite small (just a single simple Activity) and correspondingly easy to understand. Beginners can also pick up the basics of Material Design from it.

MLManager

(GitHub | Play Store | Difficulty: beginner)

MLManager is a simple, intuitive app manager for Android devices. If you want to learn how to get detailed information about the apps installed on a device, extract APKs from those apps, or uninstall them, this project is an ideal choice.

The coding standards used in the app are good and worth following, and its adherence to Material Design principles can give you some good ideas for designing clean, simple apps.

PhotoAffix

(GitHub | Play Store | Difficulty: beginner)

A cleanly designed app that stitches photos together horizontally or vertically. Sounds simple? It is.

For any beginner wanting to learn the basics of Android development, this is an ideal project. Its coding standards are first-rate, and to my mind it reflects the best development practices.

From this project you can also learn to implement simple and useful custom views, building a foundation for designing more complex views later.

MovieGuide

(GitHub | Difficulty: intermediate)

The app's functionality is fairly simple: it lists popular movies along with trailers and reviews. What makes the project really interesting is the way it's implemented.

The app showcases some excellent development practices: MVP, Uncle Bob's Clean Architecture, dependency injection with Dagger 2, and RxJava.

The app itself is very simple, but the way it's built is superb and definitely worth a look.

AnExplorer

(GitHub | Play Store | Difficulty: intermediate)

Another simple, lightweight, minimalist file manager, designed for phones and tablets.

There's a lot to learn from this project: file handling, root permission management, loaders, custom views, and more. The code is well designed, so you won't need much time to grasp what's going on inside.

Minimal ToDo

(GitHub | Play Store | Difficulty: beginner)

For a beginner, this is a project to test the waters with: it is very simple and quite cool. Through it you'll get to know most of the fundamentals of Android development.

The app is decently designed and a good starting point for beginners. But don't follow its coding standards and package structure; they are not up to standard and should be avoided.

Timber

(GitHub | Play Store | Difficulty: advanced)

Timber is a beautifully designed, full-featured music player for Android. If you want to build your own music player, or any music-related app, this is the project to look at.

The project is fairly large and under heavy active development. Grasping all the code may be a bit hard for beginners, but it should be fascinating for any intermediate or advanced Android developer.

AnotherMonitor

(GitHub | Play Store | Difficulty: intermediate)

If you want to learn how to monitor Android processes, memory usage, CPU usage, and related internals, this is the perfect project to start learning from.

It is fairly small and easy to understand, but the coding conventions, the architecture it follows, and the overall design are not up to standard and shouldn't be taken as a reference.

InstaMaterial

(GitHub | Difficulty: beginner)

If you're looking for a project to learn or improve your Material Design skills, this is the one. The project attempts to replicate parts of the Instagram app in the Material Design language.

The app uses plenty of Material Design elements, animations, and transitions that you can learn from and implement in your own projects.

It's very simple, easy to understand, and a great fit for any Android developer wanting to improve their design skills.

CoCoin

(GitHub | Difficulty: easy)

CoCoin is a thorough personal finance and bookkeeping solution running on top of a clean, beautiful user interface.

If you want to learn how to manage large amounts of user data properly, draw beautiful charts from that data, and build some cool custom views, this repository is made for you.

OmniNotes

(GitHub | Play Store | Difficulty: intermediate)

If you're building a full-featured, Evernote-like note-taking Android app, starting here is the right decision.

The project packs quite a few features, such as sharing and searching notes; attaching images, videos, audio, and sketches; adding reminders; and more.

Another cool thing you can learn from this project is how to integrate your app seamlessly with Google Now.

Clip Stack

(GitHub | Play Store | Difficulty: beginner)

A simple, clean, beautiful clipboard manager app for Android. The project is fairly small, and simple and easy to understand.

However, the package structure, architecture, naming conventions, and coding standards used in it are not up to the mark. It is built in a very simple, beginner-friendly way.

Super Clean Master

(GitHub | Difficulty: advanced)

If you've ever used an Android device, you've surely needed to clean some junk data off it. Clean Master is one of the most popular choices for that worldwide.

This app, as the name suggests, tries to simulate most of Clean Master's features in a very clean and elegant way. The overall project is somewhat complex, though, and it may take you a while to understand all the code.

Travel Mate

(GitHub | Difficulty: intermediate)

If you're looking to build a travel app that relies heavily on location and maps, this project is probably the best starting point.

The app's design and code quality aren't quite up to the mark yet, but the overall design is very good, and there is plenty for beginner and even intermediate Android developers to learn.

KISS

(GitHub | Play Store | Difficulty: intermediate)

A simple, ultra-fast, lightweight Android launcher app, with several cool and nicely built features to learn from.

So if you want to build an Android launcher, this is probably the best place to start. The app is fairly small, and the project is quite simple too.

Turbo Editor

(GitHub | Play Store | Difficulty: intermediate)

A simple yet powerful editor app for Android. You can use it to write code as well, with syntax highlighting support for several programming languages.

I even tried opening huge text files that crash or fail to open in most other apps, and this one handled them quite gracefully. From this project you'll get to learn how to make a really solid and powerful text (or code) editor app.

Wally

(GitHub | Difficulty: beginner)

A fast, simple, efficient wallpaper app for Android. There's a lot to learn from this project, especially for beginners.

The architecture used in the app is quite good, which makes it easy to extend and maintain. The app's goal is very simple, but the approach taken to achieve it is worth borrowing.

Pedometer

(GitHub | Difficulty: beginner)

A simple, lightweight pedometer app that counts steps using the hardware sensors, with almost no impact on battery life.

It's a good project for starting to learn about step tracking, but the coding standards and design aren't good enough to use as a reference.

I've shared open-source Android apps from a variety of sources to meet almost everyone's needs, with apps suited to every kind of Android developer, from beginner difficulty to the more professional.

I hope you find these open-source projects genuinely useful. This article was originally published on TechBeacon.

Original article: 20+ Awesome Open-Source Android Apps To Boost Your Development Skills

JetBrains IntelliJ IDEA Ultimate 2006/ 2017.1.1 / 2017.1.2 Crack

How to crack

1.Install the latest version of IntelliJ IDEA (v2016 v2017.x.x)
2.Start it
3.When you have to enter the license, change to [License server]
In the Server URL input field enter: http://idea.strongd.net/ . For older Servers, check out the bottom of the page, they are all listed

4.Click on [Ok] and everything should

OS Kernel Parameters Every DBA Must Know

Background

To accommodate the widest range of hardware, operating systems ship with very permissive initial settings.

Left untuned, these values may not suit HPC workloads or even moderately good hardware.

They fail to exploit the hardware's potential and can even hamper some applications, databases in particular.

OS kernel parameters databases care about

The examples below assume a host with 512 GB of RAM.

1.

Parameter

fs.aio-max-nr

Supported systems

CentOS 6, 7

Parameter description

aio-nr & aio-max-nr:    
.  
aio-nr is the running total of the number of events specified on the    
io_setup system call for all currently active aio contexts.    
.  
If aio-nr reaches aio-max-nr then io_setup will fail with EAGAIN.    
.  
Note that raising aio-max-nr does not result in the pre-allocation or re-sizing    
of any kernel data structures.    
.  
aio-nr & aio-max-nr:    
.  
aio-nr shows the current system-wide number of asynchronous io requests.    
.  
aio-max-nr allows you to change the maximum value aio-nr can grow to.    

Recommended setting

fs.aio-max-nr = 1xxxxxx
.
Neither PostgreSQL nor Greenplum uses io_setup to create aio contexts, so for them this need not be set.
An Oracle database that uses aio does need it.
Setting it does no harm anyway: if you adopt asynchronous I/O later, you won't have to revisit this setting.

2.

Parameter

fs.file-max

Supported systems

CentOS 6, 7

Parameter description

file-max & file-nr:    
.  
The value in file-max denotes the maximum number of file handles that the Linux kernel will allocate.   
.  
When you get lots of error messages about running out of file handles,   
you might want to increase this limit.    
.  
Historically, the kernel was able to allocate file handles dynamically,   
but not to free them again.     
.  
The three values in file-nr denote :      
the number of allocated file handles ,     
the number of allocated but unused file handles ,     
the maximum number of file handles.     
.  
Linux 2.6 always reports 0 as the number of free    
file handles -- this is not an error, it just means that the    
number of allocated file handles exactly matches the number of    
used file handles.    
.  
Attempts to allocate more file descriptors than file-max are reported with printk,   
look for "VFS: file-max limit <number> reached".    

Recommended setting

fs.file-max = 7xxxxxxx
.
PostgreSQL manages a VFS layer of its own: the FDs it really opens are mapped onto the kernel's open/close bookkeeping, so in practice it doesn't need that many file handles. See also the max_files_per_process parameter.
Suppose 1 GB of RAM supports 100 connections and each connection opens 1,000 files: one PG instance then opens 100,000 files. On a 512 GB machine running up to 500 PG instances, that is 50 million file handles.
The setting above is more than enough.

3.

Parameter

kernel.core_pattern

Supported systems

CentOS 6, 7

Parameter description

core_pattern:    
.  
core_pattern is used to specify a core dumpfile pattern name.    
. max length 128 characters; default value is "core"    
. core_pattern is used as a pattern template for the output filename;    
  certain string patterns (beginning with '%') are substituted with    
  their actual values.    
. backward compatibility with core_uses_pid:    
        If core_pattern does not include "%p" (default does not)    
        and core_uses_pid is set, then .PID will be appended to    
        the filename.    
. corename format specifiers:    
        %<NUL>  '%' is dropped    
        %%      output one '%'    
        %p      pid    
        %P      global pid (init PID namespace)    
        %i      tid    
        %I      global tid (init PID namespace)    
        %u      uid    
        %g      gid    
        %d      dump mode, matches PR_SET_DUMPABLE and    
                /proc/sys/fs/suid_dumpable    
        %s      signal number    
        %t      UNIX time of dump    
        %h      hostname    
        %e      executable filename (may be shortened)    
        %E      executable path    
        %<OTHER> both are dropped    
. If the first character of the pattern is a '|', the kernel will treat    
  the rest of the pattern as a command to run.  The core dump will be    
  written to the standard input of that program instead of to a file.    

Recommended setting

kernel.core_pattern = /xxx/core_%e_%u_%t_%s.%p
.
The target directory needs mode 777; if it is a symlink, the real directory needs mode 777:
mkdir /xxx
chmod 777 /xxx
Leave enough free space for the dumps.

4.

Parameter

kernel.sem

Supported systems

CentOS 6, 7

Parameter description

kernel.sem = 4096 2147483647 2147483646 512000    
.  
4096 – semaphores per set (must be >= 17; PostgreSQL uses one set per 16 processes, and each set needs 17 semaphores),
2147483647 – total semaphores system-wide (2^31-1, and greater than 4096*512000),
2147483646 – maximum operations per semop() call (2^31-1),
512000 – number of semaphore sets (at 100 connections per GB, 512 GB supports 51,200 connections; with other processes included, > 51200*2/16 sets is more than enough)
.  
# sysctl -w kernel.sem="4096 2147483647 2147483646 512000"    
.  
# ipcs -s -l    
  ------ Semaphore Limits --------    
max number of arrays = 512000    
max semaphores per array = 4096    
max semaphores system wide = 2147483647    
max ops per semop call = 2147483646    
semaphore max value = 32767    

Recommended setting

kernel.sem = 4096 2147483647 2147483646 512000
.
4096 per set probably fits even more scenarios, so a generous value does no harm; the key point is that 512000 arrays is also enough.

5.

Parameter

kernel.shmall = 107374182
kernel.shmmax = 274877906944
kernel.shmmni = 819200

Supported systems

CentOS 6, 7

Parameter description

Assuming 512 GB of host RAM:
.
shmmax – maximum size of a single shared memory segment: 256 GB (half of host RAM, in bytes)
shmall – maximum total of all shared memory segments (80% of host RAM, in pages)
shmmni – up to 819200 shared memory segments may be created (each database instance needs 2 segments at startup; once segments can be created dynamically, demand may grow)
.
# getconf PAGE_SIZE
4096

Recommended setting

kernel.shmall = 107374182
kernel.shmmax = 274877906944
kernel.shmmni = 819200
.
In 9.2 and earlier, the database requests a large amount of shared memory at startup; budget for the following:
Connections:	(1800 + 270 * max_locks_per_transaction) * max_connections
Autovacuum workers:	(1800 + 270 * max_locks_per_transaction) * autovacuum_max_workers
Prepared transactions:	(770 + 270 * max_locks_per_transaction) * max_prepared_transactions
Shared disk buffers:	(block_size + 208) * shared_buffers
WAL buffers:	(wal_block_size + 8) * wal_buffers
Fixed space requirements:	770 kB
.
The recommendations above are derived from the pre-9.2 requirements and remain suitable for later versions.

6.

Parameter

net.core.netdev_max_backlog

Supported systems

CentOS 6, 7

Parameter description

netdev_max_backlog    
  ------------------    
Maximum number  of  packets,  queued  on  the  INPUT  side,    
when the interface receives packets faster than kernel can process them.    

Recommended setting

net.core.netdev_max_backlog=1xxxx
.
The longer the INPUT queue, the more each packet costs to process; if you manage traffic with iptables, increase this value.

7.

Parameter

net.core.rmem_default
net.core.rmem_max
net.core.wmem_default
net.core.wmem_max

Supported systems

CentOS 6, 7

Parameter description

rmem_default    
  ------------    
The default setting of the socket receive buffer in bytes.    
.  
rmem_max    
  --------    
The maximum receive socket buffer size in bytes.    
.  
wmem_default    
  ------------    
The default setting (in bytes) of the socket send buffer.    
.  
wmem_max    
  --------    
The maximum send socket buffer size in bytes.    

Recommended setting

net.core.rmem_default = 262144    
net.core.rmem_max = 4194304    
net.core.wmem_default = 262144    
net.core.wmem_max = 4194304    

8.

Parameter

net.core.somaxconn

Supported systems

CentOS 6, 7

Parameter description

somaxconn - INTEGER    
        Limit of socket listen() backlog, known in userspace as SOMAXCONN.    
        Defaults to 128.    
	See also tcp_max_syn_backlog for additional tuning for TCP sockets.    

Recommended setting

net.core.somaxconn=4xxx    

9.

Parameter

net.ipv4.tcp_max_syn_backlog

Supported systems

CentOS 6, 7

Parameter description

tcp_max_syn_backlog - INTEGER    
        Maximal number of remembered connection requests, which have not    
        received an acknowledgment from connecting client.    
        The minimal value is 128 for low memory machines, and it will    
        increase in proportion to the memory of machine.    
        If server suffers from overload, try increasing this number.    

Recommended setting

net.ipv4.tcp_max_syn_backlog=4xxx
pgpool-II uses this value to queue connections beyond num_init_child,
so it determines how many connections can wait in the queue.

10.

Parameter

net.ipv4.tcp_keepalive_intvl=20
net.ipv4.tcp_keepalive_probes=3
net.ipv4.tcp_keepalive_time=60

Supported systems

CentOS 6, 7

Parameter description

tcp_keepalive_time - INTEGER    
        How often TCP sends out keepalive messages when keepalive is enabled.    
        Default: 2hours.    
.  
tcp_keepalive_probes - INTEGER    
        How many keepalive probes TCP sends out, until it decides that the    
        connection is broken. Default value: 9.    
.  
tcp_keepalive_intvl - INTEGER    
        How frequently the probes are send out. Multiplied by    
        tcp_keepalive_probes it is time to kill not responding connection,    
        after probes started. Default value: 75sec i.e. connection    
        will be aborted after ~11 minutes of retries.    

Recommended setting

net.ipv4.tcp_keepalive_intvl=20
net.ipv4.tcp_keepalive_probes=3
net.ipv4.tcp_keepalive_time=60
.
After a connection has been idle for 60 seconds, a probe is sent every 20 seconds; after 3 unanswered probes the connection is closed. From the start of idleness to teardown therefore takes 60 + 3*20 = 120 seconds.
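
These kernel values are system-wide defaults; an individual application can also manage keepalive per socket. A minimal illustrative sketch in Go (the address is a placeholder):

package main

import (
	"log"
	"net"
	"time"
)

func main() {
	conn, err := net.Dial("tcp", "db.example.com:5432") // placeholder address
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	tcp := conn.(*net.TCPConn)
	tcp.SetKeepAlive(true)                   // enable keepalive probes for this socket
	tcp.SetKeepAlivePeriod(60 * time.Second) // per-socket idle time before probing
}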

11.

Parameter

net.ipv4.tcp_mem=8388608 12582912 16777216

Supported systems

CentOS 6, 7

Parameter description

tcp_mem - vector of 3 INTEGERs: min, pressure, max    
(unit: pages)
        min: below this number of pages TCP is not bothered about its    
        memory appetite.    
.  
        pressure: when amount of memory allocated by TCP exceeds this number    
        of pages, TCP moderates its memory consumption and enters memory    
        pressure mode, which is exited when memory consumption falls    
        under "min".    
.  
        max: number of pages allowed for queueing by all TCP sockets.    
.  
        Defaults are calculated at boot time from amount of available    
        memory.    
With 64 GB of RAM, the values computed automatically at boot look like this:
net.ipv4.tcp_mem = 1539615      2052821 3079230
.
With 512 GB of RAM, the automatically computed values look like this:
net.ipv4.tcp_mem = 49621632     66162176        99243264
.
Letting the OS compute this automatically at boot is also perfectly fine.

Recommended setting

net.ipv4.tcp_mem=8388608 12582912 16777216
.
Letting the OS compute this automatically at boot is also perfectly fine.

12.

Parameter

net.ipv4.tcp_fin_timeout

Supported systems

CentOS 6, 7

Parameter description

tcp_fin_timeout - INTEGER    
        The length of time an orphaned (no longer referenced by any    
        application) connection will remain in the FIN_WAIT_2 state    
        before it is aborted at the local end.  While a perfectly    
        valid "receive only" state for an un-orphaned connection, an    
        orphaned connection in FIN_WAIT_2 state could otherwise wait    
        forever for the remote to close its end of the connection.    
        Cf. tcp_max_orphans    
        Default: 60 seconds    

Recommended setting

net.ipv4.tcp_fin_timeout=5
.
Speeds up reclamation of orphaned (zombie) connections.

13.

Parameter

net.ipv4.tcp_synack_retries

Supported systems

CentOS 6, 7

Parameter description

tcp_synack_retries - INTEGER    
        Number of times SYNACKs for a passive TCP connection attempt will    
        be retransmitted. Should not be higher than 255. Default value    
        is 5, which corresponds to 31seconds till the last retransmission    
        with the current initial RTO of 1second. With this the final timeout    
        for a passive TCP connection will happen after 63seconds.    

Recommended setting

net.ipv4.tcp_synack_retries=2
.
Shortens the SYN-ACK retransmission window.

14.

Parameter

net.ipv4.tcp_syncookies

Supported systems

CentOS 6, 7

Parameter description

tcp_syncookies - BOOLEAN    
        Only valid when the kernel was compiled with CONFIG_SYN_COOKIES    
        Send out syncookies when the syn backlog queue of a socket    
        overflows. This is to prevent against the common 'SYN flood attack'    
        Default: 1    
.  
        Note, that syncookies is fallback facility.    
        It MUST NOT be used to help highly loaded servers to stand    
        against legal connection rate. If you see SYN flood warnings    
        in your logs, but investigation shows that they occur    
        because of overload with legal connections, you should tune    
        another parameters until this warning disappear.    
        See: tcp_max_syn_backlog, tcp_synack_retries, tcp_abort_on_overflow.    
.  
        syncookies seriously violate TCP protocol, do not allow    
        to use TCP extensions, can result in serious degradation    
        of some services (f.e. SMTP relaying), visible not by you,    
        but your clients and relays, contacting you. While you see    
        SYN flood warnings in logs not being really flooded, your server    
        is seriously misconfigured.    
.  
        If you want to test which effects syncookies have to your    
        network connections you can set this knob to 2 to enable    
        unconditionally generation of syncookies.    

Recommended setting

net.ipv4.tcp_syncookies=1
.
Protects against SYN flood attacks.

15.

Parameter

net.ipv4.tcp_timestamps

Supported systems

CentOS 6, 7

Parameter description

tcp_timestamps - BOOLEAN    
        Enable timestamps as defined in RFC1323.    

Recommended setting

net.ipv4.tcp_timestamps=1
.
tcp_timestamps is a TCP extension: packet timestamps protect against wrapped sequence numbers (PAWS, Protect Against Wrapped Sequence numbers) and can improve TCP performance.

16.

Parameter

net.ipv4.tcp_tw_recycle
net.ipv4.tcp_tw_reuse
net.ipv4.tcp_max_tw_buckets

Supported systems

CentOS 6, 7

Parameter description

tcp_tw_recycle - BOOLEAN    
        Enable fast recycling TIME-WAIT sockets. Default value is 0.    
        It should not be changed without advice/request of technical    
        experts.    
.  
tcp_tw_reuse - BOOLEAN    
        Allow to reuse TIME-WAIT sockets for new connections when it is    
        safe from protocol viewpoint. Default value is 0.    
        It should not be changed without advice/request of technical    
        experts.    
.  
tcp_max_tw_buckets - INTEGER  
        Maximal number of timewait sockets held by system simultaneously.  
        If this number is exceeded time-wait socket is immediately destroyed  
        and warning is printed.   
	This limit exists only to prevent simple DoS attacks,   
	you _must_ not lower the limit artificially,   
        but rather increase it (probably, after increasing installed memory),    
        if network conditions require more than default value.   

Recommended setting

net.ipv4.tcp_tw_recycle=0
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_max_tw_buckets = 2xxxxx
.
Enabling net.ipv4.tcp_tw_recycle together with net.ipv4.tcp_timestamps is not recommended.

17.

Parameter

net.ipv4.tcp_rmem
net.ipv4.tcp_wmem

Supported systems

CentOS 6, 7

Parameter description

tcp_wmem - vector of 3 INTEGERs: min, default, max    
        min: Amount of memory reserved for send buffers for TCP sockets.    
        Each TCP socket has rights to use it due to fact of its birth.    
        Default: 1 page    
.  
        default: initial size of send buffer used by TCP sockets.  This    
        value overrides net.core.wmem_default used by other protocols.    
        It is usually lower than net.core.wmem_default.    
        Default: 16K    
.  
        max: Maximal amount of memory allowed for automatically tuned    
        send buffers for TCP sockets. This value does not override    
        net.core.wmem_max.  Calling setsockopt() with SO_SNDBUF disables    
        automatic tuning of that socket's send buffer size, in which case    
        this value is ignored.    
        Default: between 64K and 4MB, depending on RAM size.    
.  
tcp_rmem - vector of 3 INTEGERs: min, default, max    
        min: Minimal size of receive buffer used by TCP sockets.    
        It is guaranteed to each TCP socket, even under moderate memory    
        pressure.    
        Default: 1 page    
.  
        default: initial size of receive buffer used by TCP sockets.    
        This value overrides net.core.rmem_default used by other protocols.    
        Default: 87380 bytes. This value results in window of 65535 with    
        default setting of tcp_adv_win_scale and tcp_app_win:0 and a bit    
        less for default tcp_app_win. See below about these variables.    
.  
        max: maximal size of receive buffer allowed for automatically    
        selected receiver buffers for TCP socket. This value does not override    
        net.core.rmem_max.  Calling setsockopt() with SO_RCVBUF disables    
        automatic tuning of that socket's receive buffer size, in which    
        case this value is ignored.    
        Default: between 87380B and 6MB, depending on RAM size.    

Recommended setting

net.ipv4.tcp_rmem=8192 87380 16777216
net.ipv4.tcp_wmem=8192 65536 16777216
.
A setting recommended by many databases to improve network performance.

18.

Parameter

net.nf_conntrack_max
net.netfilter.nf_conntrack_max

Supported systems

CentOS 6

Parameter description

nf_conntrack_max - INTEGER    
        Size of connection tracking table.    
	Default value is nf_conntrack_buckets value * 4.    

Recommended setting

net.nf_conntrack_max=1xxxxxx    
net.netfilter.nf_conntrack_max=1xxxxxx    

19.

Parameter

vm.dirty_background_bytes
vm.dirty_expire_centisecs
vm.dirty_ratio
vm.dirty_writeback_centisecs

Supported systems

CentOS 6, 7

Parameter description

==============================================================    
.  
dirty_background_bytes    
.  
Contains the amount of dirty memory at which the background kernel    
flusher threads will start writeback.    
.  
Note: dirty_background_bytes is the counterpart of dirty_background_ratio. Only    
one of them may be specified at a time. When one sysctl is written it is    
immediately taken into account to evaluate the dirty memory limits and the    
other appears as 0 when read.    
.  
==============================================================    
.  
dirty_background_ratio    
.  
Contains, as a percentage of total system memory, the number of pages at which    
the background kernel flusher threads will start writing out dirty data.    
.  
==============================================================    
.  
dirty_bytes    
.  
Contains the amount of dirty memory at which a process generating disk writes    
will itself start writeback.    
.  
Note: dirty_bytes is the counterpart of dirty_ratio. Only one of them may be    
specified at a time. When one sysctl is written it is immediately taken into    
account to evaluate the dirty memory limits and the other appears as 0 when    
read.    
.  
Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any    
value lower than this limit will be ignored and the old configuration will be    
retained.    
.  
==============================================================    
.  
dirty_expire_centisecs    
.  
This tunable is used to define when dirty data is old enough to be eligible    
for writeout by the kernel flusher threads.  It is expressed in 100'ths    
of a second.  Data which has been dirty in-memory for longer than this    
interval will be written out next time a flusher thread wakes up.    
.  
==============================================================    
.  
dirty_ratio    
.  
Contains, as a percentage of total system memory, the number of pages at which    
a process which is generating disk writes will itself start writing out dirty    
data.    
.  
==============================================================    
.  
dirty_writeback_centisecs    
.  
The kernel flusher threads will periodically wake up and write `old' data    
out to disk.  This tunable expresses the interval between those wakeups, in    
100'ths of a second.    
.  
Setting this to zero disables periodic writeback altogether.    
.  
==============================================================    

Recommended setting

vm.dirty_background_bytes = 4096000000
vm.dirty_expire_centisecs = 6000
vm.dirty_ratio = 80
vm.dirty_writeback_centisecs = 50
.
Reduces how often database processes must flush dirty pages themselves; size dirty_background_bytes to your actual IOPS capacity and RAM.

20.

Parameter

vm.extra_free_kbytes

Supported systems

CentOS 6

Parameter description

extra_free_kbytes    
.  
This parameter tells the VM to keep extra free memory   
between the threshold where background reclaim (kswapd) kicks in,   
and the threshold where direct reclaim (by allocating processes) kicks in.    
.  
This is useful for workloads that require low latency memory allocations    
and have a bounded burstiness in memory allocations,   
for example a realtime application that receives and transmits network traffic    
(causing in-kernel memory allocations) with a maximum total message burst    
size of 200MB may need 200MB of extra free memory to avoid direct reclaim    
related latencies.    
.  
The goal is to have background reclaim do the work, kicking in this many kilobytes earlier than direct reclaim would, so user processes can keep allocating memory quickly.

Recommended setting

vm.extra_free_kbytes=4xxxxxx    

21.

Parameter

vm.min_free_kbytes

Supported systems

CentOS 6, 7

Parameter description

min_free_kbytes:    
.  
This is used to force the Linux VM to keep a minimum number    
of kilobytes free.  The VM uses this number to compute a    
watermark[WMARK_MIN] value for each lowmem zone in the system.    
Each lowmem zone gets a number of reserved free pages based    
proportionally on its size.    
.  
Some minimal amount of memory is needed to satisfy PF_MEMALLOC    
allocations; if you set this to lower than 1024KB, your system will    
become subtly broken, and prone to deadlock under high loads.    
.  
Setting this too high will OOM your machine instantly.    

Recommended setting

vm.min_free_kbytes = 2xxxxxx
.
Prevents the system from becoming unresponsive under high load and reduces the chance of memory-allocation deadlock.

22.

Parameter

vm.mmap_min_addr

Supported systems

CentOS 6, 7

Parameter description

mmap_min_addr    
.  
This file indicates the amount of address space  which a user process will    
be restricted from mmapping.  Since kernel null dereference bugs could    
accidentally operate based on the information in the first couple of pages    
of memory userspace processes should not be allowed to write to them.  By    
default this value is set to 0 and no protections will be enforced by the    
security module.  Setting this value to something like 64k will allow the    
vast majority of applications to work correctly and provide defense in depth    
against future potential kernel bugs.    

Recommended setting

vm.mmap_min_addr=6xxxx
.
Guards against problems caused by latent kernel NULL-dereference bugs.

23.

Parameter

vm.overcommit_memory
vm.overcommit_ratio

Supported systems

CentOS 6, 7

Parameter description

==============================================================    
.  
overcommit_kbytes:    
.  
When overcommit_memory is set to 2, the committed address space is not    
permitted to exceed swap plus this amount of physical RAM. See below.    
.  
Note: overcommit_kbytes is the counterpart of overcommit_ratio. Only one    
of them may be specified at a time. Setting one disables the other (which    
then appears as 0 when read).    
.  
==============================================================    
.  
overcommit_memory:    
.  
This value contains a flag that enables memory overcommitment.    
.  
When this flag is 0,   
the kernel attempts to estimate the amount    
of free memory left when userspace requests more memory.    
.  
When this flag is 1,   
the kernel pretends there is always enough memory until it actually runs out.    
.  
When this flag is 2,   
the kernel uses a "never overcommit"    
policy that attempts to prevent any overcommit of memory.    
Note that user_reserve_kbytes affects this policy.    
.  
This feature can be very useful because there are a lot of    
programs that malloc() huge amounts of memory "just-in-case"    
and don't use much of it.    
.  
The default value is 0.    
.  
See Documentation/vm/overcommit-accounting and    
security/commoncap.c::cap_vm_enough_memory() for more information.    
.  
==============================================================    
.  
overcommit_ratio:    
.  
When overcommit_memory is set to 2,   
the committed address space is not permitted to exceed   
      swap + this percentage of physical RAM.    
See above.    
.  
==============================================================    

Recommended setting

vm.overcommit_memory = 0
vm.overcommit_ratio = 90
.
With vm.overcommit_memory = 0, vm.overcommit_ratio need not be set; it only matters under overcommit_memory = 2, where the commit limit is swap + 90% of physical RAM.

24.

Parameter

vm.swappiness

Supported systems

CentOS 6, 7

Parameter description

swappiness    
.  
This control is used to define how aggressive the kernel will swap    
memory pages.    
Higher values will increase agressiveness, lower values    
decrease the amount of swap.    
.  
The default value is 60.    

Recommended setting

vm.swappiness = 0    

25.

Parameter

vm.zone_reclaim_mode

Supported systems

CentOS 6, 7

Parameter description

zone_reclaim_mode:    
.  
Zone_reclaim_mode allows someone to set more or less aggressive approaches to    
reclaim memory when a zone runs out of memory. If it is set to zero then no    
zone reclaim occurs. Allocations will be satisfied from other zones / nodes    
in the system.    
.  
This is value ORed together of    
.  
1       = Zone reclaim on    
2       = Zone reclaim writes dirty pages out    
4       = Zone reclaim swaps pages    
.  
zone_reclaim_mode is disabled by default.  For file servers or workloads    
that benefit from having their data cached, zone_reclaim_mode should be    
left disabled as the caching effect is likely to be more important than    
data locality.    
.  
zone_reclaim may be enabled if it's known that the workload is partitioned    
such that each partition fits within a NUMA node and that accessing remote    
memory would cause a measurable performance reduction.  The page allocator    
will then reclaim easily reusable pages (those page cache pages that are    
currently not used) before allocating off node pages.    
.  
Allowing zone reclaim to write out pages stops processes that are    
writing large amounts of data from dirtying pages on other nodes. Zone    
reclaim will write out dirty pages if a zone fills up and so effectively    
throttle the process. This may decrease the performance of a single process    
since it cannot use all of system memory to buffer the outgoing writes    
anymore but it preserve the memory on other nodes so that the performance    
of other processes running on other nodes will not be affected.    
.  
Allowing regular swap effectively restricts allocations to the local    
node unless explicitly overridden by memory policies or cpuset    
configurations.    

Recommended setting

vm.zone_reclaim_mode=0
.
Disables NUMA zone reclaim.

26.

Parameter

net.ipv4.ip_local_port_range

Supported systems

CentOS 6, 7

Parameter description

ip_local_port_range - 2 INTEGERS  
        Defines the local port range that is used by TCP and UDP to  
        choose the local port. The first number is the first, the  
        second the last local port number. The default values are  
        32768 and 61000 respectively.  
.  
ip_local_reserved_ports - list of comma separated ranges  
        Specify the ports which are reserved for known third-party  
        applications. These ports will not be used by automatic port  
        assignments (e.g. when calling connect() or bind() with port  
        number 0). Explicit port allocation behavior is unchanged.  
.  
        The format used for both input and output is a comma separated  
        list of ranges (e.g. "1,2-4,10-10" for ports 1, 2, 3, 4 and  
        10). Writing to the file will clear all previously reserved  
        ports and update the current list with the one given in the  
        input.  
.  
        Note that ip_local_port_range and ip_local_reserved_ports  
        settings are independent and both are considered by the kernel  
        when determining which ports are available for automatic port  
        assignments.  
.  
        You can reserve ports which are not in the current  
        ip_local_port_range, e.g.:  
.  
        $ cat /proc/sys/net/ipv4/ip_local_port_range  
        32000   61000  
        $ cat /proc/sys/net/ipv4/ip_local_reserved_ports  
        8080,9148  
.  
        although this is redundant. However such a setting is useful  
        if later the port range is changed to a value that will  
        include the reserved ports.  
.  
        Default: Empty  

Recommended setting

net.ipv4.ip_local_port_range=40000 65535
.
Restricts the dynamically assigned local port range so it cannot collide with listening ports.

27.

Parameter

vm.nr_hugepages

Supported systems

CentOS 6, 7

Parameter description

==============================================================  
nr_hugepages  
Change the minimum size of the hugepage pool.  
See Documentation/vm/hugetlbpage.txt  
==============================================================  
nr_overcommit_hugepages  
Change the maximum size of the hugepage pool. The maximum is  
nr_hugepages + nr_overcommit_hugepages.  
See Documentation/vm/hugetlbpage.txt  
.  
The output of "cat /proc/meminfo" will include lines like:  
......  
HugePages_Total: vvv  
HugePages_Free:  www  
HugePages_Rsvd:  xxx  
HugePages_Surp:  yyy  
Hugepagesize:    zzz kB  
.  
where:  
HugePages_Total is the size of the pool of huge pages.  
HugePages_Free  is the number of huge pages in the pool that are not yet  
                allocated.  
HugePages_Rsvd  is short for "reserved," and is the number of huge pages for  
                which a commitment to allocate from the pool has been made,  
                but no allocation has yet been made.  Reserved huge pages  
                guarantee that an application will be able to allocate a  
                huge page from the pool of huge pages at fault time.  
HugePages_Surp  is short for "surplus," and is the number of huge pages in  
                the pool above the value in /proc/sys/vm/nr_hugepages. The  
                maximum number of surplus huge pages is controlled by  
                /proc/sys/vm/nr_overcommit_hugepages.  
.  
/proc/filesystems should also show a filesystem of type "hugetlbfs" configured  
in the kernel.  
.  
/proc/sys/vm/nr_hugepages indicates the current number of "persistent" huge  
pages in the kernel's huge page pool.  "Persistent" huge pages will be  
returned to the huge page pool when freed by a task.  A user with root  
privileges can dynamically allocate more or free some persistent huge pages  
by increasing or decreasing the value of 'nr_hugepages'.  

Recommended setting

If you want PostgreSQL to use huge pages, set this parameter.
It just needs to exceed the shared memory the database requires.

28.

Parameter

fs.nr_open

Supported systems

CentOS 6, 7

Parameter description

nr_open:

This denotes the maximum number of file-handles a process can
allocate. Default value is 1024*1024 (1048576) which should be
enough for most machines. Actual limit depends on RLIMIT_NOFILE
resource limit.

It also constrains the file-handle limits in security/limits.conf: a single process cannot open more handles than fs.nr_open, so to raise the per-process file-handle limit you must raise nr_open first.

Recommended setting

For PostgreSQL databases with many objects (tables, views, indexes, sequences, materialized views, and so on), about 20 million is recommended,
e.g. fs.nr_open=20480000

Resource limits databases care about

1. Set them in /etc/security/limits.conf, or with ulimit.

2. Inspect a running process's current limits via /proc/$pid/limits; a small helper for this follows the list below.

#        - core - limits the core file size (KB)  
#        - memlock - max locked-in-memory address space (KB)  
#        - nofile - max number of open files  (about 10 million is recommended, but sysctl fs.nr_open must first be set above it, otherwise you may be unable to log in)
#        - nproc - max number of processes  
The four settings above are the ones that matter most.
....  
#        - data - max data size (KB)  
#        - fsize - maximum filesize (KB)  
#        - rss - max resident set size (KB)  
#        - stack - max stack size (KB)  
#        - cpu - max CPU time (MIN)  
#        - as - address space limit (KB)  
#        - maxlogins - max number of logins for this user  
#        - maxsyslogins - max number of logins on the system  
#        - priority - the priority to run user process with  
#        - locks - max number of file locks the user can hold  
#        - sigpending - max number of pending signals  
#        - msgqueue - max memory used by POSIX message queues (bytes)  
#        - nice - max nice priority allowed to raise to values: [-20, 19]  
#        - rtprio - max realtime priority  
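
A tiny illustrative helper (a sketch in Go) to dump the limits of the current process, equivalent to cat /proc/self/limits:

package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// /proc/self/limits shows the soft/hard limits of this process.
	data, err := os.ReadFile("/proc/self/limits")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(data))
}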

I/O scheduling policies databases care about

1. Operating systems currently support I/O scheduling policies such as cfq, deadline, and noop.

/kernel-doc-xxx/Documentation/block  
-r--r--r-- 1 root root   674 Apr  8 16:33 00-INDEX  
-r--r--r-- 1 root root 55006 Apr  8 16:33 biodoc.txt  
-r--r--r-- 1 root root   618 Apr  8 16:33 capability.txt  
-r--r--r-- 1 root root 12791 Apr  8 16:33 cfq-iosched.txt  
-r--r--r-- 1 root root 13815 Apr  8 16:33 data-integrity.txt  
-r--r--r-- 1 root root  2841 Apr  8 16:33 deadline-iosched.txt  
-r--r--r-- 1 root root  4713 Apr  8 16:33 ioprio.txt  
-r--r--r-- 1 root root  2535 Apr  8 16:33 null_blk.txt  
-r--r--r-- 1 root root  4896 Apr  8 16:33 queue-sysfs.txt  
-r--r--r-- 1 root root  2075 Apr  8 16:33 request.txt  
-r--r--r-- 1 root root  3272 Apr  8 16:33 stat.txt  
-r--r--r-- 1 root root  1414 Apr  8 16:33 switching-sched.txt  
-r--r--r-- 1 root root  3916 Apr  8 16:33 writeback_cache_control.txt  

For the detailed rules of these scheduling policies, see the wiki or the kernel documentation.

The current scheduling policy can be read here:

cat /sys/block/vdb/queue/scheduler   
noop [deadline] cfq   

To change it:

echo deadline > /sys/block/hda/queue/scheduler  

Or set the boot parameter:

grub.conf  
elevator=deadline  

Across many test results, databases run more consistently under the deadline scheduler.

Other

1. Disable transparent huge pages.

2. Disable NUMA.

3. Align SSD partitions.

Best-Practice Deployment of PostgreSQL on Linux

Background

Installing a database has always been fairly involved, Oracle especially; people around me still earn side income installing Oracle databases.

PostgreSQL is actually easy to install, but a default install is merely usable, not good. Many users install it the default way, run a round of performance tests, find the performance lacking, and walk away.

The reason, needless to say, is the absence of tuning in many areas.

To remain usable in as many scenarios as possible, PostgreSQL ships with very conservative default parameters, which usually need tuning: checkpoints, shared buffers, and so on.

This article describes the best way to deploy PostgreSQL on Linux. Much of the material appears across my other articles, but it had not been pulled together into one document.

OS and hardware certification checks

The goal is to confirm that the server and the OS are certified for each other.

Intel Xeon v3 and v4 CPUs differ in the minimum RHEL version they support;

details: https://access.redhat.com/support/policy/intel

Intel Xeon v3 and v4 CPUs also differ in the minimum Oracle Linux version they support;

details: http://linux.oracle.com/pls/apex/f?p=117:1

First: the Red Hat ecosystem, Red Hat's certification list: https://access.redhat.com/ecosystem

Second: Oracle Linux's hardware certification list for servers and storage: http://linux.oracle.com/pls/apex/f?p=117:1

Install common packages

# yum -y install coreutils glib2 lrzsz mpstat dstat sysstat e4fsprogs xfsprogs ntp readline-devel zlib-devel openssl-devel pam-devel libxml2-devel libxslt-devel python-devel tcl-devel gcc make smartmontools flex bison perl-devel perl-ExtUtils* openldap-devel jadetex  openjade bzip2

Configure OS kernel parameters

1. sysctl

Note that some parameters must be sized to the machine's RAM (noted inline).

For the meanings, see

"OS Kernel Parameters Every DBA Must Know" above.

# vi /etc/sysctl.conf

# add by digoal.zhou
fs.aio-max-nr = 1048576
fs.file-max = 76724600
kernel.core_pattern= /data01/corefiles/core_%e_%u_%t_%s.%p         
# create /data01/corefiles in advance with mode 777; if it is a symlink, the target directory must be 777
kernel.sem = 4096 2147483647 2147483646 512000    
# semaphores; view with ipcs -l or -u; one set per 16 processes, 17 semaphores per set
kernel.shmall = 107374182      
# limit on the total size of all shared memory segments (80% of RAM recommended)
kernel.shmmax = 274877906944   
# maximum size of a single shared memory segment (half of RAM recommended); versions after 9.2 use far less shared memory
kernel.shmmni = 819200         
# how many shared memory segments may exist; each PG cluster needs at least 2
net.core.netdev_max_backlog = 10000
net.core.rmem_default = 262144       
# The default setting of the socket receive buffer in bytes.
net.core.rmem_max = 4194304          
# The maximum receive socket buffer size in bytes
net.core.wmem_default = 262144       
# The default setting (in bytes) of the socket send buffer.
net.core.wmem_max = 4194304          
# The maximum send socket buffer size in bytes.
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_keepalive_intvl = 20
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_mem = 8388608 12582912 16777216
net.ipv4.tcp_fin_timeout = 5
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syncookies = 1    
# enable SYN cookies: when the SYN backlog overflows, handle connections with cookies, fending off small SYN flood attacks
net.ipv4.tcp_timestamps = 1    
# reduce TIME_WAIT
net.ipv4.tcp_tw_recycle = 0    
# =1 enables fast recycling of TIME-WAIT sockets, but NAT environments may see failed connections; keep it off on servers
net.ipv4.tcp_tw_reuse = 1      
# 开启重用。允许将TIME-WAIT套接字重新用于新的TCP连接
net.ipv4.tcp_max_tw_buckets = 262144
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.tcp_wmem = 8192 65536 16777216
net.nf_conntrack_max = 1200000
net.netfilter.nf_conntrack_max = 1200000
vm.dirty_background_bytes = 409600000       
#  系统脏页到达这个值,系统后台刷脏页调度进程 pdflush(或其他) 自动将(dirty_expire_centisecs/100)秒前的脏页刷到磁盘
vm.dirty_expire_centisecs = 3000             
#  比这个值老的脏页,将被刷到磁盘。3000表示30秒。
vm.dirty_ratio = 95                          
#  如果系统进程刷脏页太慢,使得系统脏页超过内存 95 % 时,则用户进程如果有写磁盘的操作(如fsync, fdatasync等调用),则需要主动把系统脏页刷出。
#  有效防止用户进程刷脏页,在单机多实例,并且使用CGROUP限制单实例IOPS的情况下非常有效。  
vm.dirty_writeback_centisecs = 100            
#  pdflush(或其他)后台刷脏页进程的唤醒间隔, 100表示1秒。
vm.mmap_min_addr = 65536
vm.overcommit_memory = 0     
#  在分配内存时,允许少量over malloc, 如果设置为 1, 则认为总是有足够的内存,内存较少的测试环境可以使用 1 .  
vm.overcommit_ratio = 90     
#  当overcommit_memory = 2 时,用于参与计算允许指派的内存大小。
vm.swappiness = 0            
#  关闭交换分区
vm.zone_reclaim_mode = 0     
# 禁用 numa, 或者在vmlinux中禁止. 
net.ipv4.ip_local_port_range = 40000 65535    
# 本地自动分配的TCP, UDP端口号范围
fs.nr_open=20480000
# 单个进程允许打开的文件句柄上限

# 以下参数请注意
# vm.extra_free_kbytes = 4096000
# vm.min_free_kbytes = 2097152
# 如果是小内存机器,以上两个值不建议设置
# vm.nr_hugepages = 66536    
#  建议shared buffer设置超过64GB时 使用大页,页大小 /proc/meminfo Hugepagesize
# vm.lowmem_reserve_ratio = 1 1 1
# 对于内存大于64G时,建议设置,否则建议默认值 256 256 32
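
Since kernel.shmall (suggested 80% of RAM, counted in pages) and kernel.shmmax (suggested half of RAM, in bytes) scale with memory, here is a small sketch that derives both values for the current host; the awk/getconf plumbing is just one way to do it:

mem_bytes=$(awk '/MemTotal/ {print $2 * 1024}' /proc/meminfo)   # total RAM in bytes
page_size=$(getconf PAGE_SIZE)                                  # shmall is counted in pages
echo "kernel.shmall = $(( mem_bytes * 80 / 100 / page_size ))"
echo "kernel.shmmax = $(( mem_bytes / 2 ))"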

2. Apply the configuration

sysctl -p

Configure OS resource limits

# vi /etc/security/limits.conf

# If nofile exceeds 1048576, you must first raise sysctl fs.nr_open to a larger value and apply it before setting nofile.

* soft    nofile  1024000
* hard    nofile  1024000
* soft    nproc   unlimited
* hard    nproc   unlimited
* soft    core    unlimited
* hard    core    unlimited
* soft    memlock unlimited
* hard    memlock unlimited

Also check the files in the /etc/security/limits.d directory; their contents override the settings in limits.conf.

For processes that are already running, check their ulimits in /proc/pid/limits, for example:

Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            10485760             unlimited            bytes     
Max core file size        0                    unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             11286                11286                processes 
Max open files            1024                 4096                 files     
Max locked memory         65536                65536                bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       11286                11286                signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us
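
To pull out just the rows that matter most for PostgreSQL from such a listing, something like the following works; the pgrep pattern is illustrative (substitute any pid):

grep -E 'open files|processes' /proc/$(pgrep -o -x postgres)/limits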

If you are going to start other processes, log out of the shell and back in, confirm the ulimit settings have taken effect, and only then start them.

Configure the OS firewall

(Set rules for your business scenario; here I simply flush them first.)

iptables -F

Example configuration

# private network ranges
-A INPUT -s 192.168.0.0/16 -j ACCEPT
-A INPUT -s 10.0.0.0/8 -j ACCEPT
-A INPUT -s 172.16.0.0/12 -j ACCEPT
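
Beyond the private ranges, clients usually also need the database's listening port opened; a sketch assuming the port 1921 configured later in this manual:

-A INPUT -p tcp --dport 1921 -j ACCEPT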

SELinux

If you have no requirement for it, disabling it is recommended.

# vi /etc/sysconfig/selinux 

SELINUX=disabled
SELINUXTYPE=targeted

Disable unnecessary OS services

chkconfig --list|grep on  
# disable anything unnecessary, for example
chkconfig iscsi off

Deploy the file system

Mind SSD partition alignment to extend the drive's life and avoid write amplification.

parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart primary 1MiB 100%
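
parted can verify that the new partition is optimally aligned; for the partition created above it should report that partition 1 is aligned:

parted -s /dev/sda align-check opt 1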

Format it (if you chose ext4):

mkfs.ext4 /dev/sda1 -m 0 -O extent,uninit_bg -E lazy_itable_init=1 -T largefile -L u01

Recommended ext4 mount options:

# vi /etc/fstab

LABEL=u01 /u01     ext4        defaults,noatime,nodiratime,nodelalloc,barrier=0,data=writeback    0 0

# mkdir /u01
# mount -a

Why data=writeback?

(figure)

It is recommended to put pg_xlog on a dedicated block device with very good IOPS.

Set the SSD scheduler to deadline

For non-SSD disks stick with cfq; for SSDs, deadline is recommended.

Temporary setting (e.g. for sda):

echo deadline > /sys/block/sda/queue/scheduler

Permanent setting

Edit the grub file to change the block-device scheduling policy:

vi /boot/grub.conf

elevator=deadline

Note: if the host has both mechanical disks and SSDs, you can use /etc/rc.local to set the matching scheduler for each disk, as sketched below.
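
A minimal /etc/rc.local sketch, assuming sda is the SSD and sdb the mechanical disk (adjust the device names to your host):

echo deadline > /sys/block/sda/queue/scheduler
echo cfq > /sys/block/sdb/queue/scheduler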

Disable transparent huge pages and NUMA

Combined with the default I/O scheduler above:

vi /boot/grub.conf

elevator=deadline numa=off transparent_hugepage=never 

Compiler

A newer compiler is recommended; installing gcc 6.2.0 itself is omitted here.

cd ~
tar -jxvf gcc6.2.0.tar.bz2
tar -jxvf python2.7.12.tar.bz2


# vi /etc/ld.so.conf

/home/digoal/gcc6.2.0/lib
/home/digoal/gcc6.2.0/lib64
/home/digoal/python2.7.12/lib

# ldconfig

Environment variables

# vi ~/env_pg.sh

export PS1="$USER@`/bin/hostname -s`-> "
export PGPORT=$1
export PGDATA=/$2/digoal/pg_root$PGPORT
export LANG=en_US.utf8
export PGHOME=/home/digoal/pgsql9.6
export LD_LIBRARY_PATH=/home/digoal/gcc6.2.0/lib:/home/digoal/gcc6.2.0/lib64:/home/digoal/python2.7.12/lib:$PGHOME/lib:/lib64:/usr/lib64:/usr/local/lib64:/lib:/usr/lib:/usr/local/lib:$LD_LIBRARY_PATH
export PATH=/home/digoal/gcc6.2.0/bin:/home/digoal/python2.7.12/bin:/home/digoal/cmake3.6.3/bin:$PGHOME/bin:$PATH:.
export DATE=`date +"%Y%m%d%H%M"`
export MANPATH=$PGHOME/share/man:$MANPATH
export PGHOST=$PGDATA
export PGUSER=postgres
export PGDATABASE=postgres
alias rm='rm -i'
alias ll='ls -lh'
unalias vi
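
The script takes the port and the base directory as positional parameters, so source it with both arguments; the same invocation appears again in the build steps below:

. ~/env_pg.sh 1921 u01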

icc, clang

If you want to compile PostgreSQL with ICC or clang, see:

《[转载]用intel编译器icc编译PostgreSQL》

《PostgreSQL clang vs gcc 编译》

Compile PostgreSQL

Using NAMED_POSIX_SEMAPHORES is recommended.

src/backend/port/posix_sema.c

create sem : 
named :
                mySem = sem_open(semname, O_CREAT | O_EXCL,
                                                 (mode_t) IPCProtection, (unsigned) 1);


unnamed :
/*
 * PosixSemaphoreCreate
 *
 * Attempt to create a new unnamed semaphore.
 */
static void
PosixSemaphoreCreate(sem_t * sem)
{
        if (sem_init(sem, 1, 1) < 0)
                elog(FATAL, "sem_init failed: %m");
}


remove sem : 

#ifdef USE_NAMED_POSIX_SEMAPHORES
        /* Got to use sem_close for named semaphores */
        if (sem_close(sem) < 0)
                elog(LOG, "sem_close failed: %m");
#else
        /* Got to use sem_destroy for unnamed semaphores */
        if (sem_destroy(sem) < 0)
                elog(LOG, "sem_destroy failed: %m");
#endif

Build options

. ~/env_pg.sh 1921 u01

cd postgresql-9.6.1
export USE_NAMED_POSIX_SEMAPHORES=1
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O3 -flto" ./configure --prefix=/home/digoal/pgsql9.6
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O3 -flto" make world -j 64
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O3 -flto" make install-world

For a development environment where you need to debug, build like this instead:

cd postgresql-9.6.1
export USE_NAMED_POSIX_SEMAPHORES=1
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O0 -flto -g -ggdb -fno-omit-frame-pointer" ./configure --prefix=/home/digoal/pgsql9.6 --enable-cassert
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O0 -flto -g -ggdb -fno-omit-frame-pointer" make world -j 64
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O0 -flto -g -ggdb -fno-omit-frame-pointer" make install-world

Initialize the database cluster

pg_xlog is best placed on the partition with the best IOPS.

. ~/env_pg.sh 1921 u01
initdb -D $PGDATA -E UTF8 --locale=C -U postgres -X /u02/digoal/pg_xlog$PGPORT

Configure postgresql.conf

Using PostgreSQL 9.6 on a 512GB-RAM host as an example.

Appending settings at the end of the file is fine; when a setting appears more than once, the last occurrence wins.
  
$ vi postgresql.conf

listen_addresses = '0.0.0.0'
port = 1921
max_connections = 5000
unix_socket_directories = '.'
tcp_keepalives_idle = 60
tcp_keepalives_interval = 10
tcp_keepalives_count = 10
shared_buffers = 128GB
maintenance_work_mem = 4GB
dynamic_shared_memory_type = posix
vacuum_cost_delay = 0
bgwriter_delay = 10ms
bgwriter_lru_maxpages = 1000
bgwriter_lru_multiplier = 10.0
bgwriter_flush_after = 0  # with good I/O there is no need for smoothed write-back scheduling
max_worker_processes = 128
max_parallel_workers_per_gather = 0
old_snapshot_threshold = -1
backend_flush_after = 0  # with good I/O there is no need for smoothed write-back scheduling
wal_level = replica
synchronous_commit = off
full_page_writes = on
wal_buffers = 1GB
wal_writer_delay = 10ms
wal_writer_flush_after = 0  # with good I/O there is no need for smoothed write-back scheduling
checkpoint_timeout = 30min  # avoid frequent checkpoints, or the XLOG will contain many full-page writes.
max_wal_size = 256GB  # suggested: 2x shared_buffers
min_wal_size = 64GB
checkpoint_completion_target = 0.05  # with good disks, let checkpoints finish quickly; recovery then also reaches a consistent state quickly.
checkpoint_flush_after = 0  # with good I/O there is no need for smoothed write-back scheduling
archive_mode = on
archive_command = '/bin/date'      #  change later, e.g. 'test ! -f /disk1/digoal/arch/%f && cp %p /disk1/digoal/arch/%f'
max_wal_senders = 8
random_page_cost = 1.3  # with good I/O there is little cost difference between random and sequential scans
parallel_tuple_cost = 0
parallel_setup_cost = 0
min_parallel_relation_size = 0
effective_cache_size = 300GB  # use your judgment; roughly whatever RAM is left over ends up as OS cache.
force_parallel_mode = off
log_destination = 'csvlog'
logging_collector = on
log_truncate_on_rotation = on
log_checkpoints = on
log_connections = on
log_disconnections = on
log_error_verbosity = verbose
log_timezone = 'PRC'
vacuum_defer_cleanup_age = 0
hot_standby_feedback = off
max_standby_archive_delay = 300s
max_standby_streaming_delay = 300s
autovacuum = on
log_autovacuum_min_duration = 0
autovacuum_max_workers = 16  # with many CPU cores and good I/O this can be higher, but note that 16 * autovacuum memory adds up, so you need the RAM to back it.
autovacuum_naptime = 30s
autovacuum_vacuum_scale_factor = 0.1
autovacuum_analyze_scale_factor = 0.2
autovacuum_freeze_max_age = 1600000000
autovacuum_multixact_freeze_max_age = 1600000000
vacuum_freeze_table_age = 1500000000
vacuum_multixact_freeze_table_age = 1500000000
datestyle = 'iso, mdy'
timezone = 'PRC'
lc_messages = 'C'
lc_monetary = 'C'
lc_numeric = 'C'
lc_time = 'C'
default_text_search_config = 'pg_catalog.english'
shared_preload_libraries='pg_stat_statements'

## If the database has very many small files (e.g. hundreds of thousands of tables plus their indexes, all of which get accessed), allow more FDs so backends don't keep opening and closing files.
## But stay below the ulimit -n (open files) configured earlier.
max_files_per_process=655360

Configure pg_hba.conf

Block unnecessary access, open only what is allowed, and be sure to require password authentication.

$ vi pg_hba.conf

host replication xx 0.0.0.0/0 md5  # streaming replication

host all postgres 0.0.0.0/0 reject # reject superuser logins from the network
host all all 0.0.0.0/0 md5  # all other users log in with md5 passwords

Start the database

pg_ctl start
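
Because env_pg.sh exports PGHOST, PGPORT, PGUSER, and PGDATABASE, a bare psql works for a quick smoke test, e.g. to confirm the server is up and the memory settings took effect:

psql -c "select version();"
psql -c "show shared_buffers;"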

That's it: your PostgreSQL database is essentially deployed, and you can start playing with it.

Analyzing a Hung Java Process

1 Introduction

 

1.1 Purpose

 

So that whenever we discover a hung process in the future, we can analyze it properly and preserve a snapshot of the scene right away.

 

1.2 Background

 

Recently one of our Tomcat applications would occasionally become unreachable. After a period of observation, another Tomcat instance showed the same behavior. Briefly, the symptoms on that Tomcat were: client requests got no response; the Tomcat process on the server was still alive; the business log showed no new access entries at all; even Tomcat's catalina.log contained no access records. We could basically conclude that this Tomcat could no longer serve requests.

 

2 Analysis Steps

 

Given the hang symptoms described above, my first thought was a network problem, perhaps serious packet loss, so I started from the path a request takes. Our architecture is an nginx + tomcat cluster, and the flow of an incoming request can be sketched roughly as follows:

 

 

 

2.1 Check nginx's network

 

I changed the nginx configuration so that this nginx forwarded requests only to the problematic Tomcat on the same machine, then checked access.log for incoming requests. All current requests showed up, so the network could be ruled out.

 

2.2 Check Tomcat's network

 

Next I checked whether Tomcat's access log (xxxx.log) recorded any requests; it contained no access records at all. Since our deployment forwards from the local nginx to the local Tomcat, a network problem can again be ruled out. At this point we can conclude the network is fine and that Tomcat itself is hung. Tomcat's logs had reported OutOfMemoryError exceptions, so we can be confident the hang was caused by OOM.

 

3 Analyzing the JVM Memory Overflow

 

3.1 Why memory leaks happen

 

When we learn Java, we are taught that its greatest convenience is that we don't have to manage memory allocation and release; the JVM handles all of it. When a Java object is no longer referenced and heap memory runs short, the JVM garbage-collects it and reclaims the heap space it occupied. But as long as an object is still referenced, the JVM cannot collect it; if we keep creating new objects, the JVM may eventually be unable to find enough heap memory for a new allocation, and that causes OOM. OOM usually happens because we keep putting objects into a container that has no size limit or eviction mechanism.

 

3.2 Locating the problem quickly

 

When our application server uses too much memory, how do we locate the problem quickly? First we must capture a snapshot of the JVM's memory at that moment. The JDK provides many commands for this: jstack, jstat, jmap, jps, and so on. When a problem appears, we should preserve the scene quickly.

 

3.2.1 jstack

 

Shows the state and current activity of every thread in the JVM.

 

sudo jstack -F <pid>
Sample output:

From the output above we can see the Tomcat process has no deadlocks, and every thread is in a waiting state. At this point we can telnet to Tomcat's port to see whether the process still responds to requests; it gave no response at all, confirming the application is unresponsive and hung.

 

3.2.2 jstat

This is one of the more important and practical JDK commands; it reports classloader, compiler, and GC information.
Options:
-class: class loader statistics
-compile: compiler statistics
-gc: heap statistics at GC time
-gccapacity: heap capacity of the different generations (new, old, permanent)
-gccause: GC statistics (same as -gcutil) plus the events that caused GC
-gcnew: new generation statistics at GC time
-gcnewcapacity: new generation heap capacity at GC time
-gcold: old generation statistics at GC time
-gcoldcapacity: old generation heap capacity at GC time
-gcpermcapacity: permanent generation heap capacity at GC time
-gcutil: heap utilization at GC time
-printcompilation: I'm not sure what this does; I've never used it.

The most commonly used form is:
sudo jstat -class 2083 1000 10 (sample every 1 second, 10 times in total)

 

Check the heap at that moment:

 

sudo jstat -gcutil  20683 2000

Note: this screenshot was not captured during the incident.

 

The data captured when the problem occurred showed GC had stopped making progress; since full-GC logging was not enabled, we cannot confirm whether an overly long GC pause stalled the application.

 

3.2.3 Capturing a memory snapshot

 

The JDK's bundled jmap can capture a snapshot of memory at a point in time.

 

Command: jmap -dump:format=b,file=heap.bin <pid>
file: output path and file name
pid: process id (Task Manager on Windows, ps aux on Linux)
The dump file can be analyzed with Memory Analyzer (http://www.eclipse.org/mat/), which shows the object counts, memory usage, thread state, and more at the time of the dump.

The figure above shows the objects did not overflow memory.

The next figure clearly shows this project's HashMap memory usage is relatively high; since our system returns data as Maps, the high memory usage there is normal.

 

 

3.2.4 Observing the running JVM's physical memory usage

 

We can also use the jmap command to observe the running JVM's physical memory usage.
Options:
-heap: print a summary of the JVM heap
-histo: print a histogram of the JVM heap, listing class name, object count, and total size per class
-histo:live : same as above, but counts live objects only
-permstat: print permanent generation statistics

Usage:
jmap -heap 2083
This shows the memory usage of the New Generation (Eden Space, From Space, To Space), Tenured Generation, and Perm Generation.
Output:

The figure above shows the JVM configuration before the Tomcat failure; from it we can clearly read:

 

MaxHeapSize (heap): 3500M

 

MaxNewSize (new generation): 512M

 

PermSize (permanent generation): 192M

 

NewRatio sets the ratio of the young generation (Eden plus the two Survivor spaces) to the old generation (excluding the permanent generation). At 2, young:old is 1:2, so the young generation takes one third of the heap.

 

SurvivorRatio sets the size ratio of Eden to one Survivor space within the young generation. At 8, the two Survivor spaces together relate to Eden as 2:8, so one Survivor space takes one tenth of the young generation.

 

 

 

In the New Generation, the Eden space holds newly created objects, while the two Survivor spaces (from, to) hold objects that survive each garbage collection. The Old Generation holds the application's long-lived objects. There is also a Permanent Generation, which mainly holds the JVM's own reflective objects, such as class and method objects.

 

 

 

 

The figure shows the JVM's new generation was sized too small: it was completely full, no new objects could be allocated in it, and the objects it held were all still live, so GC could not reclaim them. Meanwhile the Tomcat application kept creating new objects, which led to the OOM, and that is what hung Tomcat.

 

4 Other Tomcat Hang Scenarios

 

The following Tomcat hang scenarios are described in material found online:

 

1. A deadlock caused by the application's own code.

 

2. Load too high, beyond what the service can handle.

 

3. JVM GC pauses too long, stalling the application.

 

Since the failing project did not log its GC activity, we cannot tell whether this also contributed to our Tomcat hang.

 

4. A large number of TCP connections stuck in CLOSE_WAIT.

 

netstat -n | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}'

 

TIME_WAIT 48

 

CLOSE_WAIT 2228

 

ESTABLISHED 86

 

The three common states are: ESTABLISHED means the connection is actively communicating, TIME_WAIT means our side closed the connection, and CLOSE_WAIT means the peer closed it and our side has not yet closed.
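
For scenario 4, a follow-up one-liner that shows which peers hold the CLOSE_WAIT connections (assuming netstat -n output, where column 5 is the foreign address):

netstat -n | awk '/CLOSE_WAIT/ {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head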

JetBrains IntelliJ IDEA Ultimate 2016.3.4 Crack

How to crack

1. Install the latest version of IntelliJ IDEA (v2016.3.4)
2. Start it
3. When you are asked to enter the license, switch to [License server]
   In the Server URL input field enter: http://idea.strongd.net/ . For older servers, check the bottom of the page; they are all listed there.

4. Click [OK] and everything should work

Building TensorFlow for Raspberry Pi: a Step-By-Step Guide

What You Need

  • Raspberry Pi 2 or 3 Model B
  • An SD card running Raspbian with several GB of free space
    • An 8 GB card with a fresh install of Raspbian does not have enough space. A 16 GB SD card minimum is recommended.
    • These instructions may work on Linux distributions other than Raspbian
  • Internet connection to the Raspberry Pi
  • A USB memory drive that can be installed as swap memory (if it is a flash drive, make sure you don’t care about the drive). Anything over 1 GB should be fine
  • A fair amount of time

Overview

These instructions were crafted for a Raspberry Pi 3 Model B running a vanilla copy of Raspbian 8.0 (jessie). It appears to work on Raspberry Pi 2, but there are some kinks that are being worked out. If these instructions work for different distributions, let me know!

Here’s the basic plan: build a 32-bit version of Protobuf, use that to build a RPi-friendly version of Bazel, and finally use Bazel to build TensorFlow.

The Build

1. Install basic dependencies

First, update apt-get to make sure it knows where to download everything.

sudo apt-get update

Next, install some base dependencies and tools we’ll need later.

For Protobuf:

sudo apt-get install autoconf automake libtool maven

For gRPC:

sudo apt-get install oracle-java7-jdk
# Select the jdk-7-oracle option for the update-alternatives command
sudo update-alternatives --config java

For Bazel:

sudo apt-get install pkg-config zip g++ zlib1g-dev unzip

For TensorFlow:

# For Python 2.7
sudo apt-get install python-pip python-numpy swig python-dev
sudo pip install wheel

# For Python 3.3+
sudo apt-get install python3-pip python3-numpy swig python3-dev
sudo pip3 install wheel

To be able to take advantage of certain optimization flags:

sudo apt-get install gcc-4.8 g++-4.8
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.8 100
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-4.8 100

Finally, for cleanliness, make a directory that will hold the Protobuf, Bazel, and TensorFlow repositories.

mkdir tf
cd tf

2. Build Protobuf

Clone the Protobuf repository.

git clone https://github.com/google/protobuf.git

Now move into the new protobuf directory, configure it, and make it. Note: this takes a little while.

cd protobuf
git checkout v3.0.0-beta-3.3
./autogen.sh
./configure --prefix=/usr
make -j 4
sudo make install

Once it’s made, we can move into the java directory and use Maven to build the project.

cd java
mvn package

After following these steps, you’ll have two spiffy new files: /usr/bin/protoc and protobuf/java/core/target/protobuf-java-3.0.0-beta-3.jar
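
A quick sanity check that the right protoc landed in /usr/bin; for this checkout it should report a 3.0.0 beta version:

protoc --version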

3. Build gRPC

Next, we need to build gRPC-Java, the Java implementation of gRPC. Move out of the protobuf/java directory and clone gRPC’s repository.

cd ../..
git clone https://github.com/grpc/grpc-java.git
cd grpc-java
git checkout v0.14.1
cd compiler
nano build.gradle

Around line 47:

gcc(Gcc) {
    target("linux_arm-v7") {
        cppCompiler.executable = "/usr/bin/gcc"
    }
}

Around line 60, add section for 'linux_arm-v7':

...
    x86_64 {
        architecture "x86_64"
    }
    'linux_arm-v7' {
        architecture "arm32"
        operatingSystem "linux"
    }

Around line 64, add 'arm32' to list of architectures:

...
components {
    java_plugin(NativeExecutableSpec) {
            if (arch in ['x86_32', 'x86_64', 'arm32'])
...

Around line 148, replace content inside of protoc section to hard code path to protoc binary:

protoc {
    path = '/usr/bin/protoc'
}

Once all of that is taken care of, run this command to build gRPC:

../gradlew java_pluginExecutable
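
If the build succeeds, the compiled plugin binary referenced in the next section should now exist; you can confirm from the compiler directory:

ls -lh build/exe/java_plugin/protoc-gen-grpc-java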

4. Build Bazel

First, move out of the grpc-java/compiler directory and clone Bazel’s repository.

cd ../..
git clone https://github.com/bazelbuild/bazel.git

Next, go into the new bazel directory and immediately check out version 0.3.2 of Bazel.

cd bazel
git checkout 0.3.2

After that, copy the generated Protobuf and gRPC files we created earlier into the Bazel project. Note the naming of the files in this step; it must be precise.

sudo cp /usr/bin/protoc third_party/protobuf/protoc-linux-arm32.exe
sudo cp ../protobuf/java/core/target/protobuf-java-3.0.0-beta-3.jar third_party/protobuf/protobuf-java-3.0.0-beta-1.jar
sudo cp ../grpc-java/compiler/build/exe/java_plugin/protoc-gen-grpc-java third_party/grpc/protoc-gen-grpc-java-0.15.0-linux-x86_32.exe

Before building Bazel, we need to set the javac maximum heap size for this job, or else we’ll get an OutOfMemoryError. To do this, we need to make a small addition to bazel/scripts/bootstrap/compile.sh. (Shout-out to @SangManLINUX for pointing this out.)

nano scripts/bootstrap/compile.sh

Around line 46, you’ll find this code:

if [ "${MACHINE_IS_64BIT}" = 'yes' ]; then
    PROTOC=${PROTOC:-third_party/protobuf/protoc-linux-x86_64.exe}
    GRPC_JAVA_PLUGIN=${GRPC_JAVA_PLUGIN:-third_party/grpc/protoc-gen-grpc-java-0.15.0-linux-x86_64.exe}
else
    if [ "${MACHINE_IS_ARM}" = 'yes' ]; then
        PROTOC=${PROTOC:-third_party/protobuf/protoc-linux-arm32.exe}
    else
        PROTOC=${PROTOC:-third_party/protobuf/protoc-linux-x86_32.exe}
        GRPC_JAVA_PLUGIN=${GRPC_JAVA_PLUGIN:-third_party/grpc/protoc-gen-grpc-java-0.15.0-linux-x86_32.exe}
    fi
fi

Change it to the following:

if [ "${MACHINE_IS_64BIT}" = 'yes' ]; then
    PROTOC=${PROTOC:-third_party/protobuf/protoc-linux-x86_64.exe}
    GRPC_JAVA_PLUGIN=${GRPC_JAVA_PLUGIN:-third_party/grpc/protoc-gen-grpc-java-0.15.0-linux-x86_64.exe}
else
    PROTOC=${PROTOC:-third_party/protobuf/protoc-linux-arm32.exe}
    GRPC_JAVA_PLUGIN=${GRPC_JAVA_PLUGIN:-third_party/grpc/protoc-gen-grpc-java-0.15.0-linux-x86_32.exe}
fi

Move down to line 149, where you’ll see the following block of code:

run "${JAVAC}" -classpath "${classpath}" -sourcepath "${sourcepath}" \
      -d "${output}/classes" -source "$JAVA_VERSION" -target "$JAVA_VERSION" \
      -encoding UTF-8 "@${paramfile}"

At the end of this block, add in the -J-Xmx500M flag, which sets the maximum size of the Java heap to 500 MB:

run "${JAVAC}" -classpath "${classpath}" -sourcepath "${sourcepath}" \
      -d "${output}/classes" -source "$JAVA_VERSION" -target "$JAVA_VERSION" \
      -encoding UTF-8 "@${paramfile}" -J-Xmx500M

Next up, we need to adjust third_party/protobuf/BUILD – open it up in your text editor.

nano third_party/protobuf/BUILD

We need to add the last line shown here (the arm entry) around line 29:

...
    "//third_party:freebsd": ["protoc-linux-x86_32.exe"],
    "//third_party:arm": ["protoc-linux-arm32.exe"],
}),
...

Finally, we have to add one thing to tools/cpp/cc_configure.bzl – open it up for editing:

nano tools/cpp/cc_configure.bzl

And place this around line 141 (at the beginning of the _get_cpu_value function):

...
"""Compute the cpu_value based on the OS name."""
return "arm"
...

Now we can build Bazel! Note: this also takes some time.

sudo ./compile.sh

When the build finishes, you end up with a new binary, output/bazel. Copy that to your /usr/local/bin directory.

sudo mkdir -p /usr/local/bin
sudo cp output/bazel /usr/local/bin/bazel

To make sure it’s working properly, run bazel on the command line and verify it prints help text. Note: this may take 15-30 seconds to run, so be patient!

$ bazel

Usage: bazel <command> <options> ...

Available commands:
  analyze-profile     Analyzes build profile data.
  build               Builds the specified targets.
  canonicalize-flags  Canonicalizes a list of bazel options.
  clean               Removes output files and optionally stops the server.
  dump                Dumps the internal state of the bazel server process.
  fetch               Fetches external repositories that are prerequisites to the targets.
  help                Prints help for commands, or the index.
  info                Displays runtime info about the bazel server.
  mobile-install      Installs targets to mobile devices.
  query               Executes a dependency graph query.
  run                 Runs the specified target.
  shutdown            Stops the bazel server.
  test                Builds and runs the specified test targets.
  version             Prints version information for bazel.

Getting more help:
  bazel help <command>
                   Prints help and options for <command>.
  bazel help startup_options
                   Options for the JVM hosting bazel.
  bazel help target-syntax
                   Explains the syntax for specifying targets.
  bazel help info-keys
                   Displays a list of keys used by the info command.

Move out of the bazel directory, and we’ll move onto the next step.

cd ..

5. Install a Memory Drive as Swap for Compiling

In order to successfully build TensorFlow, your Raspberry Pi needs a little bit more memory to fall back on. Fortunately, this process is pretty straightforward. Grab a USB storage drive that has at least 1GB of memory. I used a flash drive I could live without that carried no important data. That said, we’re only going to be using the drive as swap while we compile, so this process shouldn’t do too much damage to a relatively new USB drive.

First, insert your USB drive, and find the /dev/XXX path for the device.

sudo blkid

As an example, my drive’s path was /dev/sda1

Once you’ve found your device, unmount it by using the umount command.

sudo umount /dev/XXX

Then format your device to be swap:

sudo mkswap /dev/XXX

If the previous command output an alphanumeric UUID, copy that now. Otherwise, find the UUID by running blkid again, and copy the UUID associated with /dev/XXX.

sudo blkid

Now edit your /etc/fstab file to register your swap file. (I’m a Vim guy, but Nano is installed by default)

sudo nano /etc/fstab

On a separate line, enter the following information. Replace the X’s with the UUID (without quotes)

UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX none swap sw,pri=5 0 0

Save /etc/fstab, exit your text editor, and run the following command:

sudo swapon -a
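
To confirm the new swap space is active (the USB device should be listed, at priority 5):

sudo swapon -s
free -m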

If you get an error claiming it can’t find your UUID, go back and edit /etc/fstab. Replace the UUID=XXX.. bit with the original /dev/XXX information.

sudo nano /etc/fstab
# Replace the UUID with /dev/XXX
/dev/XXX none swap sw,pri=5 0 0

Alright! You’ve got swap! Don’t throw out the /dev/XXX information yet; you’ll need it to remove the device safely later on.

6. Compiling TensorFlow

First things first, clone the TensorFlow repository and move into the newly created directory.

git clone --recurse-submodules https://github.com/tensorflow/tensorflow
cd tensorflow

Note: if you’re looking to build a specific version or commit of TensorFlow (as opposed to the HEAD at master), you should check it out with git checkout now.

Once in the directory, we have to write a nifty one-liner that is incredibly important. The next line goes through all files and changes references of 64-bit program implementations (which we don’t have access to) to 32-bit implementations. Neat!

grep -Rl 'lib64' | xargs sed -i 's/lib64/lib/g'

Next, we need to delete a particular line in tensorflow/core/platform/platform.h. Open up the file in your favorite text editor:

$ sudo nano tensorflow/core/platform/platform.h

Now, scroll down toward the bottom and delete the following line containing #define IS_MOBILE_PLATFORM:

#elif defined(__arm__)
#define PLATFORM_POSIX
...
#define IS_MOBILE_PLATFORM   <----- DELETE THIS LINE

This keeps our Raspberry Pi device (which has an ARM CPU) from being recognized as a mobile device.

Now let’s configure Bazel:

$ ./configure

Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] N
Do you wish to build TensorFlow with GPU support? [y/N] N

Note: if you want to build for Python 3, specify /usr/bin/python3 for Python’s location.

Now we can use it to build TensorFlow! Warning: This takes a really, really long time. Several hours.

bazel build -c opt --copt="-mfpu=neon-vfpv4" --copt="-funsafe-math-optimizations" --copt="-ftree-vectorize" --local_resources 1024,1.0,1.0 --verbose_failures tensorflow/tools/pip_package:build_pip_package

Note: I toyed around with telling Bazel to use all four cores in the Raspberry Pi, but that seemed to make compiling more prone to completely locking up. This process takes a long time regardless, so I’m sticking with the more reliable options here. If you want to be bold, try using --local_resources 1024,2.0,1.0 or --local_resources 1024,4.0,1.0

When you wake up the next morning and it’s finished compiling, you’re in the home stretch! Use the built binary file to create a Python wheel.

bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

And then install it!

sudo pip install /tmp/tensorflow_pkg/tensorflow-0.10-cp27-none-linux_armv7l.whl
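
Finally, a quick import test; it should print the TensorFlow version matching the wheel you just installed:

python -c 'import tensorflow as tf; print(tf.__version__)'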

7. Cleaning Up

There’s one last bit of house-cleaning we need to do before we’re done: remove the USB drive that we’ve been using as swap.

First, turn off your drive as swap:

sudo swapoff /dev/XXX

Finally, remove the line you wrote in /etc/fstab referencing the device

sudo nano /etc/fstab

Then reboot your Raspberry Pi.

And you’re done! You deserve a break.