When developing for reliability or adopting resilient DevOps practices, data sits at the heart of every decision. If you are not carefully monitoring key metrics such as uptime, network load, and resource usage, you have no visibility into where to spend development effort or how to refine your operational practices. Fortunately, a wide variety of monitoring tools can help you collect and view this data. While it may be tempting to monitor absolutely everything in your system, more focused monitoring is easier to implement and yields more actionable data. SRE practices such as SLOs are most useful when they are based on metrics tied to customer impact. Deciding what to monitor, and how, is therefore an important decision. This post walks through the basics and suggests some popular monitoring tools to consider.

Where to implement monitoring

Deciding where in your system architecture to implement monitoring matters, because it lets you design the architecture around your monitoring tools rather than retrofitting existing code. Depending on where it is implemented, a monitoring tool can observe different kinds of data. The most common types of monitoring, with examples of tools that provide each, are:

Resource monitoring: also called server or infrastructure monitoring, this works by collecting data about how your servers are running. Resource monitoring tools report RAM usage, CPU load, and remaining disk space. In architectures with physical servers, information about hardware health (such as CPU temperature and component uptime) also helps avoid server failures. In cloud-based environments, aggregated views across fleets of virtual servers are more useful.

Network monitoring: this looks at the data flowing into and out of your network. The monitoring tool captures incoming requests and outgoing responses across all components such as switches, firewalls, and servers. The collected data can be as coarse as total traffic volume or as fine-grained as the frequency of specific requests.

Application performance monitoring: APM solutions collect data on how the service as a whole is performing. These tools send their own requests to the service and track metrics such as how quickly and how completely it responds. The goal is to drive detection and diagnosis of application performance problems, ensuring the service runs at the expected level.

Third-party component monitoring: this covers the health and availability of third-party components in your architecture. In the microservices era, your service likely depends on external services, from cloud hosting to ad servers, being up. As with APM, tools can check the status of these services with their own requests.

You will probably need some monitoring of each type in your overall solution. Prioritize robust, redundant monitoring so that potential problems are not missed, and tie metrics and alerts to specific services so they stay relevant to business impact.

What you need from the data

Having actionable data is not only about the data itself. To respond properly to what your monitoring tools report, the data must be presented in the most useful way. Monitoring tools can do several things for you: trigger alerts when a metric crosses a specific threshold; create event logs with highlighting based on parameters; graph metrics over time; provide at-a-glance dashboards of key service-health components; and build a queryable database of logs. When making development decisions or responding to an incident, get into the habit of asking yourself: "What do I need to look at right now to make the best choice?" Let the answer drive which data your visualizations include and which metrics matter.

Open source vs. purchased

Another important consideration is where you will get your monitoring tools and who will maintain them. Open-source and purchased tools each have advantages and drawbacks.

Open-source monitoring tools are free, which is an advantage for companies with a limited tooling budget, and fully customizable, so you can integrate them into your own architecture. That customization, however, requires dedicated development time and possibly specialized knowledge, and there is no SLA guaranteeing availability, security, or update frequency; your team takes on those responsibilities.

Purchased monitoring tools cost money but offer capabilities that open-source tools may not. The vendor keeps the tool working and up to date, and often provides customer support, training, documentation, and other resources to help you integrate it with your stack. In the age of reliability, the investment can be worth it to keep the monitoring eyes open at all times.

Monitoring tool comparison

Below are ten of the most popular SRE and DevOps monitoring tools to consider for your system.

AppDynamics is a monitoring platform focused on APM. Additional features include AI-based insights, end-user monitoring that simulates customer journeys, and business monitoring with integrated revenue analytics. A free trial is available.

DataDog is a monitoring platform aimed at cloud-scale services, with strong visualization, alerting, and data aggregation and analysis features, and it correlates performance metrics with business impact. DataDog offers a free trial.

Prometheus is a popular open-source monitoring tool providing alerting, querying, visualization, and many other useful features. A dedicated developer community supplies plenty of documentation and guides to get you started quickly.

New Relic is a monitoring platform offering several components that can also be used independently: New Relic APM (application performance monitoring), New Relic Browser, and New Relic Infrastructure. iOS and Android apps give you additional monitoring options.

Nagios offers both an open-source option (Nagios Core) and a purchasable one (Nagios XI). It provides a highly customizable interface and can monitor an entire IT network, and its configuration wizards emphasize ease of use by guiding users through setting up new monitoring services.

Dynatrace enables cross-team collaboration around its monitoring platform by providing a single shared repository of monitoring data. It also includes autonomous cloud capabilities and can bring monitoring to the IoT layer of a deployment. A free trial is available.

SolarWinds offers several products, each specializing in a different area of monitoring: network management, systems management, database management, IT security, IT service management, application management, and managed service providers. Each has a free trial.

Site24x7 specializes in website monitoring, with tools such as status pages and health diagnostics for web services like AWS and Azure. It also offers synthetic web-transaction monitoring, which lets you simulate usage and collect metrics, and provides several pricing plans depending on the services you need.

SignalFx provides a wide range of microservice integrations so you can see a complete picture of service health, which matters if your service includes many third-party components. Its focus is helping you evolve your architecture from a monolith to a microservices model.

PRTG Network Monitor is a complete monitoring service that can be integrated at many stages and locations in an architecture, covering networks, individual servers, specific applications, and everything in between. The provider also offers a free version.
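To make the "trigger alerts when a metric crosses a threshold" idea above concrete, here is a minimal sketch of a Prometheus alerting rule. The metric names assume a node_exporter-style host exporter and the file name is only illustrative, so adapt both to whatever your environment actually exposes:

# alert-rules.yml (illustrative): fires when the 5-minute load average stays
# above 1.5x the CPU count for 10 minutes.
groups:
  - name: host-resources
    rules:
      - alert: HighCpuLoad
        expr: node_load5 / count without (cpu, mode) (node_cpu_seconds_total{mode="idle"}) > 1.5
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "High load average on {{ $labels.instance }}"

A rule file like this is referenced from the rule_files section of prometheus.yml, and Alertmanager then routes the resulting alerts to receivers such as email or chat.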
Tested by the author on CentOS 7 with 2 vCPUs.

Step 1: Download the latest Ansible Tower setup archive.

wget https://releases.ansible.com/ansible-tower/setup/ansible-tower-setup-latest.tar.gz

Step 2: Extract Tower into /opt and set the initial passwords. (The archive and directory names below follow the bundle the author tested with; they will vary with the variant and version you actually downloaded.)

tar zxvf ansible-tower-setup-bundle-latest.el7.tar.gz -C /opt/
cd /opt/ansible-tower-setup-bundle-3.5.2-1.el7/

Edit the inventory file as follows:

# cat inventory
[tower]
localhost ansible_connection=local
[database]
[all:vars]
admin_password='admin'         # add this line; empty by default
pg_host=''
pg_port=''
pg_database='awx'
pg_username='awx'
pg_password='awx'              # add this line; empty by default
rabbitmq_username=tower
rabbitmq_password='tower'      # add this line; empty by default
rabbitmq_cookie=cookiemonster

Step 3: Run the setup script.

sh setup.sh   # the installation is done once this finishes without errors

Step 4: Access the web UI and activate unlimited hosts. Browse to https://<server-ip>, run the command below, then refresh the Tower page:

echo codyguo > /var/lib/awx/i18n.db

Related link: https://blog.csdn.net/CodyGuo/article/details/84136181

Created by trimstray and contributors. Before using Nginx please read the Beginner's Guide. Nginx (/ˌɛndʒɪnˈɛks/ EN-jin-EKS) is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server, originally written by Igor Sysoev. For a long time, it has been running on many heavily loaded Russian sites including Yandex, Mail.Ru, VK, and Rambler. To increase your knowledge, read the Nginx Documentation. General disclaimer This is not an official handbook. Many of these rules refer to external resources. It is rather a quick collection of some rules used by me in production environments (not only).
Before you start remember about the two most important things: Do not follow guides just to get 100% of something. Think about what you actually do at your server! These guidelines provides recommendations for very restrictive setup. Contributing If you find something which doesn't make sense, or one of these doesn't seem right, or something seems really stupid; please make a pull request or please add valid and well-reasoned opinions about your changes or comments. Before add pull request please see this. SSL Report: blkcipher.info Many of these recipes have been applied to the configuration of my private website. I finally got all 100%'s on my scores: <img src="https://github.com/trimstray/nginx-quick-reference/blob/master/doc/img/blkcipher_ssllabs_preview.png" alt="blkcipher_ssllabs_preview"> An example configuration is in this chapter. Printable high-res hardening checklist Hardening checklist based on this recipes (@ssllabs A+ 100%) - High-Res 5000x8200. For *.xcf and *.pdf formats please see this directory. <img src="https://github.com/trimstray/nginx-quick-reference/blob/master/doc/img/nginx-hardening-checklist.png" alt="nginx-hardening-checklist" width="75%" height="75%"> External Resources About Nginx :black_small_square: Nginx Project :black_small_square: Nginx Documentation :black_small_square: Nginx official read-only mirror References :black_small_square: Nginx boilerplate configs :black_small_square: Awesome Nginx configuration template :black_small_square: A collection of resources covering Nginx and more :black_small_square: Nginx Secure Web Server :black_small_square: Emiller’s Guide To Nginx Module Development Cheatsheets :black_small_square: Nginx Cheatsheet :black_small_square: Nginx Quick Reference :black_small_square: Nginx Cheatsheet by Mijdert Stuij Performance & Hardening :black_small_square: SSL/TLS Deployment Best Practices :black_small_square: SSL Server Rating Guide :black_small_square: How to Build a Tough NGINX Server in 15 Steps :black_small_square: Top 25 Nginx Web Server Best Security Practices :black_small_square: Strong SSL Security on Nginx :black_small_square: Nginx Tuning For Best Performance by Denji :black_small_square: Enable cross-origin resource sharing (CORS) :black_small_square: TLS has exactly one performance problem: it is not used widely enough :black_small_square: WAF for Nginx :black_small_square: ModSecurity for Nginx :black_small_square: Transport Layer Protection Cheat Sheet :black_small_square: Security/Server Side TLS Config generators :black_small_square: Nginx config generator on steroids Static analyzers :black_small_square: Nginx static analyzer Log analyzers :black_small_square: GoAccess :black_small_square: Graylog :black_small_square: Logstash Performance analyzers :black_small_square: ngxtop Benchmarking tools :black_small_square: siege :black_small_square: wrk :black_small_square: bombardier :black_small_square: gobench Online tools :black_small_square: SSL Server Test by SSL Labs :black_small_square: SSL/TLS Capabilities of Your Browser :black_small_square: Test SSL/TLS (PCI DSS, HIPAA and NIST) :black_small_square: SSL analyzer and certificate checker :black_small_square: Test your TLS server configuration (e.g. 
ciphers) :black_small_square: Scan your website for non-secure content :black_small_square: Strong ciphers for Apache, Nginx, Lighttpd and more :black_small_square: Analyse the HTTP response headers by Security Headers :black_small_square: Analyze your website by Mozilla Observatory :black_small_square: Linting tool that will help you with your site's accessibility, speed, security and more :black_small_square: Service to scan and analyse websites :black_small_square: Online tool to learn, build, & test Regular Expressions :black_small_square: Online Regex Tester & Debugger :black_small_square: User agent compatibility (Cipher suite) Other stuff :black_small_square: BBC Digital Media Distribution: How we improved throughput by 4x :black_small_square: Web cache server performance benchmark: nuster vs nginx vs varnish vs squid Helpers Shell aliases alias ng.test='nginx -t -c /etc/nginx/nginx.conf' alias ng.stop='ng.test && systemctl stop nginx' alias ng.reload='ng.test && systemctl reload nginx' alias ng.restart='ng.test && systemctl restart nginx' alias ng.restart='ng.test && kill -HUP `cat /var/run/nginx.pid`' Debugging See the top 5 IP addresses in a web server log cut -d ' ' -f1 /path/to/logfile | sort | uniq -c | sort -nr | head -5 | nl Analyse web server log and show only 2xx http codes tail -n 100 -f /path/to/logfile | grep "HTTP/[1-2].[0-1]\" [2]" Analyse web server log and show only 5xx http codes tail -n 100 -f /path/to/logfile | grep "HTTP/[1-2].[0-1]\" [5]" Get range of dates in a web server log awk '/'$(date -d "1 hours ago" "+%d\\/%b\\/%Y:%H:%M")'/,/'$(date "+%d\\/%b\\/%Y:%H:%M")'/ { print $0 }' /path/to/logfile awk '/05\/Feb\/2019:09:2.*/,/05\/Feb\/2019:09:5.*/' /path/to/logfile Get line rates from web server log tail -F /path/to/logfile | pv -N RAW -lc 1>/dev/null Trace network traffic for all Nginx processes strace -e trace=network -p `pidof nginx | sed -e 's/ /,/g'` List all files accessed by a Nginx strace -ff -e trace=file nginx 2>&1 | perl -ne 's/^[^"]+"(([^\\"]|\\[\\"nt])*)".*/$1/ && print' Base rules :beginner: Organising Nginx configuration Rationale When your configuration grow, the need for organising your code will also grow. Well organised code is: easier to understand easier to maintain easier to work with Use include directive to attach your Nginx specific code to global config, contexts and other. Example # Store this configuration in e.g. https-ssl-common.conf listen 10.240.20.2:443 ssl; root /etc/nginx/error-pages/other; ssl_certificate /etc/nginx/domain.com/certs/nginx_domain.com_bundle.crt; ssl_certificate_key /etc/nginx/domain.com/certs/domain.com.key; # And include this file in server section: server { include /etc/nginx/domain.com/commons/https-ssl-common.conf; server_name domain.com www.domain.com; External resources Organize your data and code :beginner: Separate listen directives for 80 and 443 Rationale Example # For http: server { listen 10.240.20.2:80; # For https: server { listen 10.240.20.2:443 ssl; External resources Understanding the Nginx Configuration File Structure and Configuration Contexts :beginner: Prevent processing requests with undefined server names Rationale Nginx should prevent processing requests with undefined server names - also traffic on IP address. It also protects against configuration errors and don't pass traffic to incorrect backends. The problem is easily solved by creating a default catch all server config. 
If none of the listen directives have the default_server parameter then the first server with the address:port pair will be the default server for this pair. If someone makes a request using an IP address instead of a server name, the Host request header field will contain the IP address and the request can be handled using the IP address as the server name. I think the best solution is return 444; for default server name because this will close the connection and log it internally, for any domain that isn't defined in Nginx. Example # Place it at the beginning of the configuration file to prevent mistakes. server { # Add default_server to your listen directive in the server that you want to act as the default. listen 10.240.20.2:443 default_server ssl; # We catch invalid domain names, requests without the "Host" header and all others (also due to the above setting). server_name _ "" default_server; return 444; # We can also serve: # location / { # static file (error page): # root /etc/nginx/error-pages/404; # or redirect: # return 301 https://badssl.com; # return 444; server { listen 10.240.20.2:443 ssl; server_name domain.com; server { listen 10.240.20.2:443 ssl; server_name domain.org; External resources Server names How nginx processes a request nginx: how to specify a default server :beginner: Use only one SSL config for specific listen directive Rationale For sharing a single IP address between several HTTPS servers you should use one SSL config (e.g. protocols, ciphers, curves) because changes will affect the default server. Remember that regardless of ssl parameters, you are able to use multiple SSL certificates. If you want to set up different SSL configurations for the same IP address then it will fail. It's important because SSL configuration is presented for default server - if none of the listen directives have the default_server parameter then the first server in your configuration. So you should use only one SSL setup with several names on the same IP address. Example # Store this configuration in e.g. https.conf listen 192.168.252.10:443 default_server ssl http2; ssl_protocols TLSv1.2; ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384"; ssl_prefer_server_ciphers on; ssl_ecdh_curve secp521r1:secp384r1; # Include this file to the server context (attach domain-a.com for specific listen directive) server { include /etc/nginx/https.conf; server_name domain-a.com; # Include this file to the server context (attach domain-b.com for specific listen directive) server { include /etc/nginx/https.conf; server_name domain-b.com; External resources Nginx one ip and multiple ssl certificates :beginner: Force all connections over TLS Rationale You should always use HTTPS instead of HTTP to protect your website, even if it doesn’t handle sensitive communications. Example server { listen 10.240.20.2:80; server_name domain.com; return 301 https://$host$request_uri; server { listen 10.240.20.2:443 ssl; server_name domain.com; External resources Should we force user to HTTPS on website? :beginner: Use geo/map modules instead allow/deny Rationale Creates variables with values depending on the client IP address. Use map or geo modules (one of them) to prevent users abusing your servers. 
Example # Map module: map $remote_addr $globals_internal_map_acl { # Status code: # - 0 = false # - 1 = true default 0; ### INTERNAL ### 10.255.10.0/24 1; 10.255.20.0/24 1; 10.255.30.0/24 1; 192.168.0.0/16 1; # Geo module: geo $globals_internal_geo_acl { # Status code: # - 0 = false # - 1 = true default 0; ### INTERNAL ### 10.255.10.0/24 1; 10.255.20.0/24 1; 10.255.30.0/24 1; 192.168.0.0/16 1; External resources Nginx Basic Configuration (Geo Ban) :beginner: Map all the things... Rationale Map module provides a more elegant solution for clearly parsing a big list of regexes, e.g. User-Agents. Manage a large number of redirects with Nginx maps. Example map $http_user_agent $device_redirect { default "desktop"; ~(?i)ip(hone|od) "mobile"; ~(?i)android.*(mobile|mini) "mobile"; ~Mobile.+Firefox "mobile"; ~^HTC "mobile"; ~Fennec "mobile"; ~IEMobile "mobile"; ~BB10 "mobile"; ~SymbianOS.*AppleWebKit "mobile"; ~Opera\sMobi "mobile"; if ($device_redirect = "mobile") { return 301 https://m.domain.com$request_uri; External resources Cool Nginx feature of the week :beginner: Drop the same root inside location block Rationale If you add a root to every location block then a location block that isn’t matched will have no root. Set global root inside server directive. Example server { server_name domain.com; root /var/www/domain.com/public; location / { location /api { location /static { root /var/www/domain.com/static; External resources Nginx Pitfalls: Root inside location block :beginner: Use debug mode for debugging Rationale The error_log directive is part of the core module. There's probably more detail than you want, but that can sometimes be a lifesaver (but log file growing rapidly on a very high-traffic sites). Example rewrite_log on; error_log /var/log/nginx/error-debug.log debug; External resources A debugging log :beginner: Use custom log formats Rationale The access_log directive is part of the HttpLogModule. Anything you can access as a variable in nginx config, you can log, including non-standard http headers, etc. so it's a simple way to create your own log format for specific situations. 
Example # Default main log format from nginx repository: log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; # Extended main log format: log_format main-level-0 '$remote_addr - $remote_user [$time_local] ' '"$request_method $scheme://$host$request_uri ' '$server_protocol" $status $body_bytes_sent ' '"$http_referer" "$http_user_agent" ' '$request_time'; # Debug log formats: log_format debug-level-0 '$remote_addr - $remote_user [$time_local] ' '"$request_method $scheme://$host$request_uri ' '$server_protocol" $status $body_bytes_sent ' '$request_id $pid $msec $request_time ' '$upstream_connect_time $upstream_header_time ' '$upstream_response_time "$request_filename" ' '$request_completion'; log_format debug-level-1 '$remote_addr - $remote_user [$time_local] ' '"$request_method $scheme://$host$request_uri ' '$server_protocol" $status $body_bytes_sent ' '$request_id $pid $msec $request_time ' '$upstream_connect_time $upstream_header_time ' '$upstream_response_time "$request_filename" $request_length ' '$request_completion $connection $connection_requests'; log_format debug-level-2 '$remote_addr - $remote_user [$time_local] ' '"$request_method $scheme://$host$request_uri ' '$server_protocol" $status $body_bytes_sent ' '$request_id $pid $msec $request_time ' '$upstream_connect_time $upstream_header_time ' '$upstream_response_time "$request_filename" $request_length ' '$request_completion $connection $connection_requests ' '$server_addr $server_port $remote_addr $remote_port'; External resources Module ngx_http_log_module Nginx: Custom access log format and error levels nginx: Log complete request/response with all headers? Performance :beginner: Adjust worker processes Rationale The worker_processes directive is the sturdy spine of life for Nginx. This directive is responsible for letting our virtual server know many workers to spawn once it has become bound to the proper IP and port(s). Official Nginx documentation say: When one is in doubt, setting it to the number of available CPU cores would be a good start (the value "auto" will try to autodetect it). I think for high load proxy servers (also standalone servers) good value is ALL_CORES - 1 (please test it before used). Example # VCPU = 4 , expr $(nproc --all) - 1 worker_processes 3; External resources Nginx Core Module - worker_processes :beginner: Use HTTP/2 Rationale HTTP/2 will make our applications faster, simpler, and more robust. The primary goals for HTTP/2 are to reduce latency by enabling full request and response multiplexing, minimize protocol overhead via efficient compression of HTTP header fields, and add support for request prioritization and server push. HTTP/2 is backwards-compatible with HTTP/1.1, so it would be possible to ignore it completely and everything will continue to work as before. Example # For https: server { listen 10.240.20.2:443 ssl http2; External resources Introduction to HTTP/2 What is HTTP/2 - The Ultimate Guide The HTTP/2 Protocol: Its Pros & Cons and How to Start Using It :beginner: Maintaining SSL Sessions Rationale This improves performance from the clients’ perspective, because it eliminates the need for a new (and time-consuming) SSL handshake to be conducted each time a request is made. Most servers do not purge sessions or ticket keys, thus increasing the risk that a server compromise would leak data from previous (and future) connections. 
Example ssl_session_cache shared:SSL:10m; ssl_session_timeout 24h; ssl_session_tickets off; ssl_buffer_size 1400; External resources SSL Session (cache) Speeding up TLS: enabling session reuse :beginner: Use exact names where possible Rationale Exact names, wildcard names starting with an asterisk, and wildcard names ending with an asterisk are stored in three hash tables bound to the listen ports. The exact names hash table is searched first. If a name is not found, the hash table with wildcard names starting with an asterisk is searched. If the name is not found there, the hash table with wildcard names ending with an asterisk is searched. Searching wildcard names hash table is slower than searching exact names hash table because names are searched by domain parts. Regular expressions are tested sequentially and therefore are the slowest method and are non-scalable. For these reasons, it is better to use exact names where possible. Example # It is more efficient to define them explicitly: server { listen 80; server_name example.org www.example.org *.example.org; # than to use the simplified form: server { listen 80; server_name .example.org; External resources Server names Hardening :beginner: Run as an unprivileged user Rationale There is no real difference in security just by changing the process owner name. On the other hand in security, the principle of least privilege states that an entity should be given no more permission than necessary to accomplish its goals within a given system. This way only master process runs as root. Example # Edit nginx.conf: user www-data; # Set owner and group for root (app, default) directory: chown -R www-data:www-data /var/www/domain.com External resources Why does nginx starts process as root? :beginner: Disable unnecessary modules Rationale It is recommended to disable any modules which are not required as this will minimize the risk of any potential attacks by limiting the operations allowed by the web server. Example # During installation: ./configure --without-http_autoindex_module # Comment modules in the configuration file e.g. modules.conf: # load_module /usr/share/nginx/modules/ndk_http_module.so; # load_module /usr/share/nginx/modules/ngx_http_auth_pam_module.so; # load_module /usr/share/nginx/modules/ngx_http_cache_purge_module.so; # load_module /usr/share/nginx/modules/ngx_http_dav_ext_module.so; load_module /usr/share/nginx/modules/ngx_http_echo_module.so; # load_module /usr/share/nginx/modules/ngx_http_fancyindex_module.so; load_module /usr/share/nginx/modules/ngx_http_geoip_module.so; load_module /usr/share/nginx/modules/ngx_http_headers_more_filter_module.so; # load_module /usr/share/nginx/modules/ngx_http_image_filter_module.so; # load_module /usr/share/nginx/modules/ngx_http_lua_module.so; load_module /usr/share/nginx/modules/ngx_http_perl_module.so; # load_module /usr/share/nginx/modules/ngx_mail_module.so; # load_module /usr/share/nginx/modules/ngx_nchan_module.so; # load_module /usr/share/nginx/modules/ngx_stream_module.so; External resources nginx-modules :beginner: Protect sensitive resources Rationale Hidden directories and files should never be web accessible. Example if ($request_uri ~ "/\.git") { return 403; location ~ /\.git { deny all; location ~* ^.*(\.(?:git|svn|htaccess))$ { return 403; # or all . directories/files in general (but remember about .well-known path) location ~ /\. 
{ deny all; External resources Hidden directories and files as a source of sensitive information about web application :beginner: Hide Nginx version number Rationale Disclosing the version of Nginx running can be undesirable, particularly in environments sensitive to information disclosure. The "Official Apache Documentation (Apache Core Features)" say: Setting ServerTokens to less than minimal is not recommended because it makes it more difficult to debug interoperational problems. Also note that disabling the Server: header does nothing at all to make your server more secure. The idea of "security through obscurity" is a myth and leads to a false sense of safety. Example server_tokens off; External resources Remove Version from Server Header Banner in nginx Reduce or remove server headers :beginner: Hide Nginx server signature Rationale In my opinion there is no real reason or need to show this much information about your server. It is easy to look up particular vulnerabilities once you know the version number. You should compile Nginx from sources with ngx_headers_more to used more_set_headers directive. Example more_set_headers "Server: Unknown"; External resources Shhh... don’t let your response headers talk too loudly How to change (hide) the Nginx Server Signature? :beginner: Hide upstream proxy headers Rationale When Nginx is used to proxy requests to an upstream server (such as a PHP-FPM instance), it can be beneficial to hide certain headers sent in the upstream response (for example, the version of PHP running). Example proxy_hide_header X-Powered-By; proxy_hide_header X-AspNetMvc-Version; proxy_hide_header X-AspNet-Version; proxy_hide_header X-Drupal-Cache; External resources Remove insecure http headers :beginner: Use only 4096-bit private keys Rationale Advisories recommend 2048 for now. Security experts are projecting that 2048 bits will be sufficient for commercial use until around the year 2030. Generally there is no compelling reason to choose 4096 bit keys over 2048 provided you use sane expiration intervals. If you want to get A+ with 100%s on SSL Lab you should definitely use 4096 bit private key. I always generate 4096 bit keys for low busy sites since the downside is minimal (slightly lower performance) and security is slightly higher (although not as high as one would like). Use of alternative solution: ECC Certificate Signing Request (CSR). The "SSL/TLS Deployment Best Practices" book say: The cryptographic handshake, which is used to establish secure connections, is an operation whose cost is highly influenced by private key size. Using a key that is too short is insecure, but using a key that is too long will result in “too much” security and slow operation. For most web sites, using RSA keys stronger than 2048 bits and ECDSA keys stronger than 256 bits is a waste of CPU power and might impair user experience. Similarly, there is little benefit to increasing the strength of the ephemeral key exchange beyond 2048 bits for DHE and 256 bits for ECDHE. Konstantin Ryabitsev (Reddit): Generally speaking, if we ever find ourselves in a world where 2048-bit keys are no longer good enough, it won't be because of improvements in brute-force capabilities of current computers, but because RSA will be made obsolete as a technology due to revolutionary computing advances. If that ever happens, 3072 or 4096 bits won't make much of a difference anyway. This is why anything above 2048 bits is generally regarded as a sort of feel-good hedging theatre. 
Example ### Example (RSA): ( _fd="domain.com.key" ; _len="4096" ; openssl genrsa -out ${_fd} ${_len} ) # Let's Encrypt: certbot certonly -d domain.com -d www.domain.com --rsa-key-size 4096 ### Example (ECC): # _curve: prime256v1, secp521r1, secp384r1 ( _fd="domain.com.key" ; _fd_csr="domain.com.csr" ; _curve="prime256v1" ; \ openssl ecparam -out ${_fd} -name ${_curve} -genkey ; openssl req -new -key ${_fd} -out ${_fd_csr} -sha256 ) # Let's Encrypt (from above): certbot --csr ${_fd_csr} -[other-args] For x25519: ( _fd="private.key" ; _curve="x25519" ; \ openssl genpkey -algorithm ${_curve} -out ${_fd} ) ssllabs score: 100 ( _fd="domain.com.key" ; _len="2048" ; openssl genrsa -out ${_fd} ${_len} ) # Let's Encrypt: certbot certonly -d domain.com -d www.domain.com ssllabs score: 90 External resources So you're making an RSA key for an HTTPS certificate. What key size do you use? :beginner: Keep only TLS 1.2 (+ TLS 1.3) Rationale It is recommended to run TLS 1.1/1.2 and fully disable SSLv2, SSLv3 and TLS 1.0 that have protocol weaknesses. TLS 1.1 and 1.2 are both without security issues - but only v1.2 provides modern cryptographic algorithms. TLS 1.0 and TLS 1.1 protocols will be removed from browsers at the beginning of 2020. Example ssl_protocols TLSv1.2; # For TLS 1.3 ssl_protocols TLSv1.2 TLSv1.3; ssllabs score: 100 ssl_protocols TLSv1.2 TLSv1.1; ssllabs score: 95 External resources TLS/SSL Explained – Examples of a TLS Vulnerability and Attack, Final Part Deprecating TLS 1.0 and 1.1 - Enhancing Security for Everyone TLS1.3 - OpenSSLWiki How to enable TLS 1.3 on Nginx :beginner: Use only strong ciphers Rationale This parameter changes quite often, the recommended configuration for today may be out of date tomorrow. For more security use only strong and not vulnerable ciphersuite (but if you use http/2 you can get Server sent fatal alert: handshake_failure error). Place ECDHE and DHE suites at the top of your list. The order is important; because ECDHE suites are faster, you want to use them whenever clients supports them. For backward compatibility software components you should use less restrictive ciphers. You should definitely disable weak ciphers like those with DSS, DSA, DES/3DES, RC4, MD5, SHA1, null, anon in the name. Example ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384"; ssllabs score: 100 ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256"; ssl_ciphers "ECDHE-ECDSA-CHACHA20-POLY1305:ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:!AES256-GCM-SHA256:!AES256-GCM-SHA128:!aNULL:!MD5"; ssllabs score: 90 Ciphersuite for TLS 1.3: ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256"; External resources SSL/TLS: How to choose your cipher suite HTTP/2 and ECDSA Cipher Suites Which SSL/TLS Protocol Versions and Cipher Suites Should I Use? 
Why use Ephemeral Diffie-Hellman Differences between TLS 1.2 and TLS 1.3

:beginner: Use more secure ECDH Curve Rationale For an SSL server certificate, an "elliptic curve" certificate will be used only with digital signatures (ECDSA algorithm). x25519 is a more secure but slightly less compatible option. To maximise interoperability with existing browsers and servers, stick to the P-256 (prime256v1) and P-384 (secp384r1) curves. NSA Suite B says that the NSA uses curves P-256 and P-384 (in OpenSSL they are designated as, respectively, "prime256v1" and "secp384r1"). There is nothing wrong with P-521, except that it is, in practice, useless. Arguably, P-384 is also useless, because the more efficient P-256 curve already provides security that cannot be broken through accumulation of computing power. Use P-256 to minimize trouble. If you feel that your manhood is threatened by using a 256-bit curve where a 384-bit curve is available, then use P-384: it will increase your computational and network costs. If you do not set ssl_ecdh_curve, Nginx will use its default settings (which seem to be P-256) and e.g. Chrome will prefer x25519; this is not recommended because you cannot control Nginx's defaults. Explicitly setting ssl_ecdh_curve X25519:prime256v1:secp521r1:secp384r1; decreases the Key Exchange SSL Labs rating. Definitely do not use the secp112r1, secp112r2, secp128r1, secp128r2, secp160k1, secp160r1, secp160r2, secp192k1 curves. They are too small for security applications according to NIST recommendations. Example ssl_ecdh_curve secp521r1:secp384r1; ssllabs score: 100 # Alternative (this one doesn't affect compatibility, by the way; it's just a question of the preferred order). This setup downgrades the Key Exchange score: ssl_ecdh_curve X25519:prime256v1:secp521r1:secp384r1; External resources Standards for Efficient Cryptography Group SafeCurves: choosing safe curves for elliptic-curve cryptography P-521 is pretty nice prime Safe ECC curves for HTTPS are coming sooner than you think Cryptographic Key Length Recommendations Testing for Weak SSL/TLS Ciphers, Insufficient Transport Layer Protection (OTG-CRYPST-001) Elliptic Curve performance: NIST vs Brainpool Which elliptic curve should I use?

:beginner: Use strong Key Exchange Rationale The DH key is only used if DH ciphers are used. Modern clients prefer ECDHE instead, and if your Nginx accepts this preference the handshake will not use the DH param at all, since it will do an ECDHE key exchange rather than a DHE one. Most of the "modern" profiles from places like Mozilla's ssl config generator no longer recommend using this. The default DH key size in OpenSSL is 1024 bits - it's vulnerable and breakable. For the best security configuration generate your own 4096-bit DH group, or use the known-safe pre-defined DH groups from Mozilla (recommended).
Example # To generate a DH key: openssl dhparam -out /etc/nginx/ssl/dhparam_4096.pem 4096 # To produce "DSA-like" DH parameters: openssl dhparam -dsaparam -out /etc/nginx/ssl/dhparam_4096.pem 4096 # To generate a ECDH key: openssl ecparam -out /etc/nginx/ssl/ecparam.pem -name prime256v1 # Nginx configuration: ssl_dhparam /etc/nginx/ssl/dhparams_4096.pem; ssllabs score: 100 External resources Weak Diffie-Hellman and the Logjam Attack Guide to Deploying Diffie-Hellman for TLS Pre-defined DHE groups Instructs OpenSSL to produce "DSA-like" DH parameters OpenSSL generate different types of self signed certificate :beginner: Defend against the BEAST attack Rationale Enables server-side protection from BEAST attacks. Example ssl_prefer_server_ciphers on; External resources Is BEAST still a threat? :beginner: Disable HTTP compression (mitigation of CRIME/BREACH attacks) Rationale You should probably never use TLS compression. Some user agents (at least Chrome) will disable it anyways. Disabling SSL/TLS compression stops the attack very effectively. Some attacks are possible because of gzip (HTTP compression not TLS compression) being enabled on SSL requests. In most cases, the best action is to simply disable gzip for SSL. You shouldn't use HTTP compression on private responses when using TLS. Compression can be (i think) okay to HTTP compress publicly available static content like css or js and HTML content with zero sensitive info (like an "About Us" page). Example gzip off; External resources Is HTTP compression safe? HTTP compression continues to put encrypted communications at risk SSL/TLS attacks: Part 2 – CRIME Attack To avoid BREACH, can we use gzip on non-token responses? :beginner: HTTP Strict Transport Security Rationale The header indicates for how long a browser should unconditionally refuse to take part in unsecured HTTP connection for a specific domain. Example add_header Strict-Transport-Security "max-age=63072000; includeSubdomains" always; ssllabs score: A+ External resources HTTP Strict Transport Security Cheat Sheet :beginner: Reduce XSS risks (Content-Security-Policy) Rationale CSP reduce the risk and impact of XSS attacks in modern browsers. Example # This policy allows images, scripts, AJAX, and CSS from the same origin, and does not allow any other resources to load. add_header Content-Security-Policy "default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self';" always; External resources Content Security Policy (CSP) Quick Reference Guide Content Security Policy – OWASP :beginner: Control the behavior of the Referer header (Referrer-Policy) Rationale Determine what information is sent along with the requests. Example add_header Referrer-Policy "no-referrer"; External resources A new security header: Referrer Policy :beginner: Provide clickjacking protection (X-Frame-Options) Rationale Helps to protect your visitors against clickjacking attacks. It is recommended that you use the x-frame-options header on pages which should not be allowed to render a page in a frame. Example add_header X-Frame-Options "SAMEORIGIN" always; External resources Clickjacking Defense Cheat Sheet :beginner: Prevent some categories of XSS attacks (X-XSS-Protection) Rationale Enable the cross-site scripting (XSS) filter built into modern web browsers. 
Example add_header X-XSS-Protection "1; mode=block" always; External resources X-XSS-Protection HTTP Header :beginner: Prevent Sniff Mimetype middleware (X-Content-Type-Options) Rationale It prevents the browser from doing MIME-type sniffing (prevents "mime" based attacks). Example add_header X-Content-Type-Options "nosniff" always; External resources X-Content-Type-Options HTTP Header :beginner: Deny the use of browser features (Feature-Policy) Rationale This header protects your site from third parties using APIs that have security and privacy implications, and also from your own team adding outdated APIs or poorly optimized images. Example add_header Feature-Policy "geolocation none; midi none; notifications none; push none; sync-xhr none; microphone none; camera none; magnetometer none; gyroscope none; speaker none; vibrate none; fullscreen self; payment none; usb none;"; External resources Feature Policy Explainer Policy Controlled Features :beginner: Reject unsafe HTTP methods Rationale Set of methods support by a resource. An ordinary web server supports the HEAD, GET and POST methods to retrieve static and dynamic content. Other (e.g. OPTIONS, TRACE) methods should not be supported on public web servers, as they increase the attack surface. Example add_header Allow "GET, POST, HEAD" always; if ($request_method !~ ^(GET|POST|HEAD)$) { return 405; External resources Vulnerability name: Unsafe HTTP methods :beginner: Control Buffer Overflow attacks Rationale Buffer overflow attacks are made possible by writing data to a buffer and exceeding that buffers’ boundary and overwriting memory fragments of a process. To prevent this in Nginx we can set buffer size limitations for all clients. Example client_body_buffer_size 100k; client_header_buffer_size 1k; client_max_body_size 100k; large_client_header_buffers 2 1k; External resources SCG WS nginx :beginner: Mitigating Slow HTTP DoS attack (Closing Slow Connections) Rationale Close connections that are writing data too infrequently, which can represent an attempt to keep connections open as long as possible. Example client_body_timeout 10s; client_header_timeout 10s; keepalive_timeout 5s 5s; send_timeout 10s; External resources Mitigating DDoS Attacks with NGINX and NGINX Plus SCG WS nginx Configuration examples Remember to make a copy of the current configuration and all files/directories. Nginx Contexts Before read this configuration remember about Nginx Contexts structure: Core Contexts Global/Main Context Events Context HTTP Context Server Context Location Context Upstream Context Mail Context Reverse Proxy This chapter describes the basic configuration of my proxy server (for blkcipher.info domain). Import configuration It's very simple - clone the repo and perform full directory sync: git clone https://github.com/trimstray/nginx-quick-reference.git rsync -avur --delete lib/nginx/ /etc/nginx/ For leaving your configuration (not recommended) remove --delete rsync param. Set bind IP address Find and replace 192.168.252.2 string in directory and file names cd /etc/nginx find . -depth -name '*192.168.252.2*' -execdir bash -c 'mv -v "$1" "${1//192.168.252.2/xxx.xxx.xxx.xxx}"' _ {} \; Find and replace 192.168.252.2 string in configuration files cd /etc/nginx find . -type f -print0 | xargs -0 sed -i 's/192.168.252.2/xxx.xxx.xxx.xxx/g' Set your domain name Find and replace blkcipher.info string in directory and file names cd /etc/nginx find . 
-depth -name '*blkcipher.info*' -execdir bash -c 'mv -v "$1" "${1//blkcipher.info/example.com}"' _ {} \; Find and replace blkcipher.info string in configuration files cd /etc/nginx find . -type f -print0 | xargs -0 sed -i 's/blkcipher_info/example_com/g' find . -type f -print0 | xargs -0 sed -i 's/blkcipher.info/example.com/g' Regenerate private keys and certs For localhost cd /etc/nginx/master/_server/localhost/certs # Private key + Self-signed certificate ( _fd="localhost.key" ; _fd_crt="nginx_localhost_bundle.crt" ; \ openssl req -x509 -newkey rsa:4096 -keyout ${_fd} -out ${_fd_crt} -days 365 -nodes \ -subj "/C=X0/ST=localhost/L=localhost/O=localhost/OU=X00/CN=localhost" ) For default_server cd /etc/nginx/master/_server/defaults/certs # Private key + Self-signed certificate ( _fd="defaults.key" ; _fd_crt="nginx_defaults_bundle.crt" ; \ openssl req -x509 -newkey rsa:4096 -keyout ${_fd} -out ${_fd_crt} -days 365 -nodes \ -subj "/C=X1/ST=default/L=default/O=default/OU=X11/CN=default_server" ) For your domain (e.g. Let's Encrypt) cd /etc/nginx/master/_server/example.com/certs # For multidomain: certbot certonly -d example.com -d www.example.com --rsa-key-size 4096 # For wildcard: certbot certonly --manual --preferred-challenges=dns -d example.com -d *.example.com --rsa-key-size 4096 # Copy private key and chain: cp /etc/letsencrypt/live/example.com/fullchain.pem nginx_example.com_bundle.crt cp /etc/letsencrypt/live/example.com/privkey.pem example.com.key Add new domain Updated nginx.conf # At the end of the file (in 'IPS/DOMAINS' section) include /etc/nginx/master/_server/domain.com/servers.conf; include /etc/nginx/master/_server/domain.com/backends.conf; Init domain directory cd /etc/nginx/cd master/_server cp -R example.com domain.com cd domain.com find . -depth -name '*example.com*' -execdir bash -c 'mv -v "$1" "${1//example.com/domain.com}"' _ {} \; find . -type f -print0 | xargs -0 sed -i 's/example_com/domain_com/g' find . -type f -print0 | xargs -0 sed -i 's/example.com/domain.com/g' Test your configuration nginx -t -c /etc/nginx/nginx.conf
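If the syntax test passes, the new configuration can be applied without dropping live connections. A small sketch of the usual test-then-reload sequence, assuming the stock systemd service name (substitute whichever service manager and unit name your distribution uses):

# Validate the configuration, then reload workers gracefully only if it is OK:
nginx -t -c /etc/nginx/nginx.conf && systemctl reload nginx

# Equivalent without systemd: send SIGHUP to the master process.
nginx -t -c /etc/nginx/nginx.conf && kill -HUP "$(cat /var/run/nginx.pid)"

This is the same behaviour the ng.reload and ng.restart aliases from the Helpers section wrap.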
1. tar command examples View the contents of a tar archive without extracting it. $ tar tvf archive_name.tar More tar examples: The Ultimate Tar Command Tutorial with 10 Practical Examples

2. grep command examples Search for a given string in a file (case-insensitive search). $ grep -i "the" demo_file Print the matched line, along with the 3 lines after it. $ grep -A 3 -i "example" demo_text Search for a given string in all files recursively $ grep -r "ramesh" * More grep examples: Get a Grip on the Grep! – 15 Practical Grep Command Examples

3. find command examples Find files using file name (case-insensitive find) # find -iname "MyCProgram.c" Execute commands on files found by the find command $ find -iname "MyCProgram.c" -exec md5sum {} \; Find all empty files in home directory # find ~ -empty More find examples: Mommy, I found it! — 15 Practical Linux Find Command Examples

4. ssh command examples Login to remote host ssh -l jsmith remotehost.example.com Debug ssh client ssh -v -l jsmith remotehost.example.com Display ssh client version $ ssh -V OpenSSH_3.9p1, OpenSSL 0.9.7a Feb 19 2003 More ssh examples: 5 Basic Linux SSH Client Commands

5. sed command examples When you copy a DOS file to Unix, you could find \r\n at the end of each line. This example converts the DOS file format to Unix file format using the sed command. $ sed 's/.$//' filename Print file content in reverse order $ sed -n '1!G;h;$p' thegeekstuff.txt Add line numbers for all non-empty lines in a file $ sed '/./=' thegeekstuff.txt | sed 'N; s/\n/ /' More sed examples: Advanced Sed Substitution Examples

6. awk command examples Remove duplicate lines using awk $ awk '!($0 in array) { array[$0]; print }' temp Print all lines from /etc/passwd that have the same uid and gid $ awk -F ':' '$3==$4' passwd.txt Print only specific fields from a file. $ awk '{print $2,$5;}' employee.txt More awk examples: 8 Powerful Awk Built-in Variables – FS, OFS, RS, ORS, NR, NF, FILENAME, FNR

7. vim command examples Go to the 143rd line of a file $ vim +143 filename.txt Go to the first match of the specified pattern $ vim +/search-term filename.txt Open the file in read-only mode. $ vim -R /etc/passwd More vim examples: How To Record and Play in Vim Editor

8. diff command examples Ignore white space while comparing. # diff -w name_list.txt name_list_new.txt 2c2,3 < John Doe --- > John M Doe > Jason Bourne More diff examples: Top 4 File Difference Tools on UNIX / Linux – Diff, Colordiff, Wdiff, Vimdiff

9. sort command examples Sort a file in ascending order $ sort names.txt Sort a file in descending order $ sort -r names.txt Sort the passwd file by the 3rd field. $ sort -t: -k 3n /etc/passwd | more

10. export command examples To view oracle related environment variables. $ export | grep ORACLE declare -x ORACLE_BASE="/u01/app/oracle" declare -x ORACLE_HOME="/u01/app/oracle/product/10.2.0" declare -x ORACLE_SID="med" declare -x ORACLE_TERM="xterm" To export an environment variable: $ export ORACLE_HOME=/u01/app/oracle/product/10.2.0

11. xargs command examples Copy all images to an external hard drive # ls *.jpg | xargs -n1 -i cp {} /external-hard-drive/directory Search all jpg images in the system and archive them. # find / -name *.jpg -type f -print | xargs tar -cvzf images.tar.gz Download all the URLs mentioned in the url-list.txt file # cat url-list.txt | xargs wget -c

12. ls command examples Display file sizes in human readable format (e.g.
KB, MB etc.,) $ ls -lh -rw-r----- 1 ramesh team-dev 8.9M Jun 12 15:27 arch-linux.txt.gz Order Files Based on Last Modified Time (In Reverse Order) Using ls -ltr $ ls -ltr Visual Classification of Files With Special Characters Using ls -F $ ls -F More ls examples: Unix LS Command: 15 Practical Examples 13. pwd command pwd is Print working directory. What else can be said about the good old pwd who has been printing the current directory name for ages. 14. cd command examples Use “cd -” to toggle between the last two directories Use “shopt -s cdspell” to automatically correct mistyped directory names on cd More cd examples: 6 Awesome Linux cd command Hacks 15. gzip command examples To create a *.gz compressed file: $ gzip test.txt To uncompress a *.gz file: $ gzip -d test.txt.gz Display compression ratio of the compressed file using gzip -l $ gzip -l *.gz compressed uncompressed ratio uncompressed_name 23709 97975 75.8% asp-patch-rpms.txt 16. bzip2 command examples To create a *.bz2 compressed file: $ bzip2 test.txt To uncompress a *.bz2 file: bzip2 -d test.txt.bz2 More bzip2 examples: BZ is Eazy! bzip2, bzgrep, bzcmp, bzdiff, bzcat, bzless, bzmore examples 17. unzip command examples To extract a *.zip compressed file: $ unzip test.zip View the contents of *.zip file (Without unzipping it): $ unzip -l jasper.zip Archive: jasper.zip Length Date Time Name -------- ---- ---- ---- 40995 11-30-98 23:50 META-INF/MANIFEST.MF 32169 08-25-98 21:07 classes_ 15964 08-25-98 21:07 classes_names 10542 08-25-98 21:07 classes_ncomp 18. shutdown command examples Shutdown the system and turn the power off immediately. # shutdown -h now Shutdown the system after 10 minutes. # shutdown -h +10 Reboot the system using shutdown command. # shutdown -r now Force the filesystem check during reboot. # shutdown -Fr now 19. ftp command examples Both ftp and secure ftp (sftp) has similar commands. To connect to a remote server and download multiple files, do the following. $ ftp IP/hostname ftp> mget *.html To view the file names located on the remote server before downloading, mls ftp command as shown below. ftp> mls *.html - /ftptest/features.html /ftptest/index.html /ftptest/othertools.html /ftptest/samplereport.html /ftptest/usage.html More ftp examples: FTP and SFTP Beginners Guide with 10 Examples 20. crontab command examples View crontab entry for a specific user # crontab -u john -l Schedule a cron job every 10 minutes. */10 * * * * /home/ramesh/check-disk-space More crontab examples: Linux Crontab: 15 Awesome Cron Job Examples 21. service command examples Service command is used to run the system V init scripts. i.e Instead of calling the scripts located in the /etc/init.d/ directory with their full path, you can use the service command. Check the status of a service: # service ssh status Check the status of all the services. service --status-all Restart a service. # service ssh restart 22. ps command examples ps command is used to display information about the processes that are running in the system. While there are lot of arguments that could be passed to a ps command, following are some of the common ones. To view current running processes. $ ps -ef | more To view current running processes in a tree structure. H option stands for process hierarchy. $ ps -efH | more 23. free command examples This command is used to display the free, used, swap memory available in the system. Typical free command output. The output is displayed in bytes. 
$ free total used free shared buffers cached Mem: 3566408 1580220 1986188 0 203988 902960 -/+ buffers/cache: 473272 3093136 Swap: 4000176 0 4000176 If you want to quickly check how many GB of RAM your system has use the -g option. -b option displays in bytes, -k in kilo bytes, -m in mega bytes. $ free -g total used free shared buffers cached Mem: 3 1 1 0 0 0 -/+ buffers/cache: 0 2 Swap: 3 0 3 If you want to see a total memory ( including the swap), use the -t switch, which will display a total line as shown below. ramesh@ramesh-laptop:~$ free -t total used free shared buffers cached Mem: 3566408 1592148 1974260 0 204260 912556 -/+ buffers/cache: 475332 3091076 Swap: 4000176 0 4000176 Total: 7566584 1592148 5974436 24. top command examples top command displays the top processes in the system ( by default sorted by cpu usage ). To sort top output by any column, Press O (upper-case O) , which will display all the possible columns that you can sort by as shown below. Current Sort Field: P for window 1:Def Select sort field via field letter, type any other key to return a: PID = Process Id v: nDRT = Dirty Pages count d: UID = User Id y: WCHAN = Sleeping in Function e: USER = User Name z: Flags = Task Flags ........ To displays only the processes that belong to a particular user use -u option. The following will show only the top processes that belongs to oracle user. $ top -u oracle More top examples: Can You Top This? 15 Practical Linux Top Command Examples 25. df command examples Displays the file system disk space usage. By default df -k displays output in bytes. $ df -k Filesystem 1K-blocks Used Available Use% Mounted on /dev/sda1 29530400 3233104 24797232 12% / /dev/sda2 120367992 50171596 64082060 44% /home df -h displays output in human readable form. i.e size will be displayed in GB’s. ramesh@ramesh-laptop:~$ df -h Filesystem Size Used Avail Use% Mounted on /dev/sda1 29G 3.1G 24G 12% / /dev/sda2 115G 48G 62G 44% /home Use -T option to display what type of file system. ramesh@ramesh-laptop:~$ df -T Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/sda1 ext4 29530400 3233120 24797216 12% / /dev/sda2 ext4 120367992 50171596 64082060 44% /home 26. kill command examples Use kill command to terminate a process. First get the process id using ps -ef command, then use kill -9 to kill the running Linux process as shown below. You can also use killall, pkill, xkill to terminate a unix process. $ ps -ef | grep vim ramesh 7243 7222 9 22:43 pts/2 00:00:00 vim $ kill -9 7243 More kill examples: 4 Ways to Kill a Process – kill, killall, pkill, xkill 27. rm command examples Get confirmation before removing the file. $ rm -i filename.txt It is very useful while giving shell metacharacters in the file name argument. Print the filename and get confirmation before removing the file. $ rm -i file* Following example recursively removes all files and directories under the example directory. This also removes the example directory itself. $ rm -r example 28. cp command examples Copy file1 to file2 preserving the mode, ownership and timestamp. $ cp -p file1 file2 Copy file1 to file2. if file2 exists prompt for confirmation before overwritting it. $ cp -i file1 file2 29. mv command examples Rename file1 to file2. if file2 exists prompt for confirmation before overwritting it. $ mv -i file1 file2 Note: mv -f is just the opposite, which will overwrite file2 without prompting. 
mv -v will print what is happening during file rename, which is useful while specifying shell metacharacters in the file name argument. $ mv -v file1 file2 30. cat command examples You can view multiple files at the same time. Following example prints the content of file1 followed by file2 to stdout. $ cat file1 file2 While displaying the file, following cat -n command will prepend the line number to each line of the output. $ cat -n /etc/logrotate.conf 1 /var/log/btmp { 2 missingok 3 monthly 4 create 0660 root utmp 5 rotate 1 31. mount command examples To mount a file system, you should first create a directory and mount it as shown below. # mkdir /u01 # mount /dev/sdb1 /u01 You can also add this to the fstab for automatic mounting. i.e Anytime system is restarted, the filesystem will be mounted. /dev/sdb1 /u01 ext2 defaults 0 2 32. chmod command examples chmod command is used to change the permissions for a file or directory. Give full access to user and group (i.e read, write and execute ) on a specific file. $ chmod ug+rwx file.txt Revoke all access for the group (i.e read, write and execute ) on a specific file. $ chmod g-rwx file.txt Apply the file permissions recursively to all the files in the sub-directories. $ chmod -R ug+rwx file.txt More chmod examples: 7 Chmod Command Examples for Beginners 33. chown command examples chown command is used to change the owner and group of a file. \ To change owner to oracle and group to db on a file. i.e Change both owner and group at the same time. $ chown oracle:dba dbora.sh Use -R to change the ownership recursively. $ chown -R oracle:dba /home/oracle 34. passwd command examples Change your password from command line using passwd. This will prompt for the old password followed by the new password. $ passwd Super user can use passwd command to reset others password. This will not prompt for current password of the user. # passwd USERNAME Remove password for a specific user. Root user can disable password for a specific user. Once the password is disabled, the user can login without entering the password. # passwd -d USERNAME 35. mkdir command examples Following example creates a directory called temp under your home directory. $ mkdir ~/temp Create nested directories using one mkdir command. If any of these directories exist already, it will not display any error. If any of these directories doesn’t exist, it will create them. $ mkdir -p dir1/dir2/dir3/dir4/ 36. ifconfig command examples Use ifconfig command to view or configure a network interface on the Linux system. View all the interfaces along with status. $ ifconfig -a Start or stop a specific interface using up and down command as shown below. $ ifconfig eth0 up $ ifconfig eth0 down More ifconfig examples: Ifconfig: 7 Examples To Configure Network Interface 37. uname command examples Uname command displays important information about the system such as — Kernel name, Host name, Kernel release number, Processor type, etc., Sample uname output from a Ubuntu laptop is shown below. $ uname -a Linux john-laptop 2.6.32-24-generic #41-Ubuntu SMP Thu Aug 19 01:12:52 UTC 2010 i686 GNU/Linux 38. whereis command examples When you want to find out where a specific Unix command exists (for example, where does ls command exists?), you can execute the following command. $ whereis ls ls: /bin/ls /usr/share/man/man1/ls.1.gz /usr/share/man/man1p/ls.1p.gz When you want to search an executable from a path other than the whereis default path, you can use -B option and give path as argument to it. 
This searches for the executable lsmk in the /tmp directory, and displays it, if it is available. $ whereis -u -B /tmp -f lsmk lsmk: /tmp/lsmk 39. whatis command examples Whatis command displays a single line description about a command. $ whatis ls ls (1) - list directory contents $ whatis ifconfig ifconfig (8) - configure a network interface 40. locate command examples Using locate command you can quickly search for the location of a specific file (or group of files). Locate command uses the database created by updatedb. The example below shows all files in the system that contains the word crontab in it. $ locate crontab /etc/anacrontab /etc/crontab /usr/bin/crontab /usr/share/doc/cron/examples/crontab2english.pl.gz /usr/share/man/man1/crontab.1.gz /usr/share/man/man5/anacrontab.5.gz /usr/share/man/man5/crontab.5.gz /usr/share/vim/vim72/syntax/crontab.vim 41. man command examples Display the man page of a specific command. $ man crontab When a man page for a command is located under more than one section, you can view the man page for that command from a specific section as shown below. $ man SECTION-NUMBER commandname Following 8 sections are available in the man page. General commands System calls C library functions Special files (usually devices, those found in /dev) and drivers File formats and conventions Games and screensavers Miscellaneous System administration commands and daemons For example, when you do whatis crontab, you’ll notice that crontab has two man pages (section 1 and section 5). To view section 5 of crontab man page, do the following. $ whatis crontab crontab (1) - maintain crontab files for individual users (V3) crontab (5) - tables for driving cron $ man 5 crontab 42. tail command examples Print the last 10 lines of a file by default. $ tail filename.txt Print N number of lines from the file named filename.txt $ tail -n N filename.txt View the content of the file in real time using tail -f. This is useful to view the log files, that keeps growing. The command can be terminated using CTRL-C. $ tail -f log-file More tail examples: 3 Methods To View tail -f output of Multiple Log Files in One Terminal 43. less command examples less is very efficient while viewing huge log files, as it doesn’t need to load the full file while opening. $ less huge-log-file.log One you open a file using less command, following two keys are very helpful. CTRL+F – forward one window CTRL+B – backward one window More less examples: Unix Less Command: 10 Tips for Effective Navigation 44. su command examples Switch to a different user account using su command. Super user can switch to any other user without entering their password. $ su - USERNAME Execute a single command from a different account name. In the following example, john can execute the ls command as raj username. Once the command is executed, it will come back to john’s account. [john@dev-server]$ su - raj -c 'ls' [john@dev-server]$ Login to a specified user account, and execute the specified shell instead of the default shell. $ su -s 'SHELLNAME' USERNAME 45. mysql command examples mysql is probably the most widely used open source database on Linux. Even if you don’t run a mysql database on your server, you might end-up using the mysql command ( client ) to connect to a mysql database running on the remote server. To connect to a remote mysql database. This will prompt for a password. $ mysql -u root -p -h 192.168.1.2 To connect to a local mysql database. 
$ mysql -u root -p If you want to specify the mysql root password in the command line itself, enter it immediately after -p (without any space). 46. yum command examples To install apache using yum. $ yum install httpd To upgrade apache using yum. $ yum update httpd To uninstall/remove apache using yum. $ yum remove httpd 47. rpm command examples To install apache using rpm. # rpm -ivh httpd-2.2.3-22.0.1.el5.i386.rpm To upgrade apache using rpm. # rpm -uvh httpd-2.2.3-22.0.1.el5.i386.rpm To uninstall/remove apache using rpm. # rpm -ev httpd More rpm examples: RPM Command: 15 Examples to Install, Uninstall, Upgrade, Query RPM Packages 48. ping command examples Ping a remote host by sending only 5 packets. $ ping -c 5 gmail.com More ping examples: Ping Tutorial: 15 Effective Ping Command Examples 49. date command examples Set the system date: # date -s "01/31/2010 23:59:53" Once you’ve changed the system date, you should syncronize the hardware clock with the system date as shown below. # hwclock –systohc # hwclock --systohc –utc 50. wget command examples The quick and effective method to download software, music, video from internet is using wget command. $ wget http://prdownloads.sourceforge.net/sourceforge/nagios/nagios-3.2.1.tar.gz Download and store it with a different name. $ wget -O taglist.zip http://www.vim.org/scripts/download_script.php?src_id=7701
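One more wget option that is frequently useful alongside the examples above (not part of the original list, just a common flag): -c resumes a partially completed download instead of starting over.
$ wget -c http://prdownloads.sourceforge.net/sourceforge/nagios/nagios-3.2.1.tar.gz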
If an application misbehaves, the problem usually shows up in the business log first.
Count the number of ERROR entries in today's business log: egrep ERROR --color logname | wc -l ; an unusually high count generally means something is wrong.
Show the 10 lines after each ERROR to see the actual stack trace: egrep -A 10 ERROR logname | less , or use -C 10 to show 10 lines before and after each ERROR.
In Java, all exceptions inherit from Throwable (a complete, instantiable class). They fall into two broad groups, Error and Exception; Exception is further divided into unchecked exceptions (subclasses of RuntimeException) and checked exceptions (subclasses of Exception but not of RuntimeException). Common keywords to search for are ERROR and Exception.
Error: AssertionError, OutOfMemoryError, StackOverflowError
UncheckedException: AlreadyBoundException, ClassCastException, ConcurrentModificationException, IllegalArgumentException, IllegalStateException, IndexOutOfBoundsException, JSONException, NullPointerException, SecurityException, UnsupportedOperationException
CheckedException: ClassNotFoundException, CloneNotSupportedException, FileAlreadyExistsException, FileNotFoundException, InterruptedException, IOException, SQLException, TimeoutException, UnknownHostException
# The above is based on: http://www.importnew.com/27348.html
To inspect a log by time range, first check its timestamp format, then use sed to cut out the window and grep for the exception keyword:
sed -n '/start-time/,/end-time/p' logfile
sed -n '/2018-12-06 00:00:00/,/2018-12-06 00:03:00/p' logname # extract a three-minute window, then pipe to grep for the relevant keyword
sed -n '/2018-12-06 08:38:00/,$p' logname | less # from the given time to the end of the log
# PS: never open log files directly with vim
2. Database issues
Very often the bottleneck of a Java application is the database; a single badly written SQL statement can cause slow queries and hang the whole application.
Watch the logs for Could not get JDBC Connection and JDBCException.
Reference: http://docs.jboss.org/hibernate/orm/3.2/api/org/hibernate/JDBCException.html
When these appear, check the database connection requests: whether the connection count is too high, whether there are deadlocks, and use the slow query log to locate the offending SQL.
3. JVM issues
JVM-level problems are usually one of the following: overly long GC pauses, OOM, deadlocks, blocked threads, or an exploding thread count. The tools below can normally pin them down.
Commonly used JDK monitoring and troubleshooting tools: jps, jstack, jmap, jstat, jconsole, jinfo, jhat, javap, btrace, TProfiler
Name and main purpose:
jps - JVM Process Status Tool: lists all accessible HotSpot Java processes with their PID, launch path and startup arguments; similar to ps on Unix but limited to Java processes, so think of jps as a subset of ps.
jstat - JVM Statistics Monitoring Tool: command-line tool for monitoring JVM runtime state; shows class loading, memory, garbage collection and JIT compilation data for local or remote JVM processes.
jinfo - Configuration info for Java: view and adjust JVM parameters at runtime.
jmap - Memory Map for Java: produces heap dump snapshots of the JVM.
jhat - JVM Heap Dump Browser: analyses heap dump files and starts an HTTP/HTML server so the results can be browsed.
jstack - Stack Trace for Java: prints thread snapshots of the JVM.
Use --help to see the exact usage of each command.
jps -v
jstat -gc 118694 500 5
jmap -dump:live,format=b,file=dump.hprof 29170
jmap -heap 29170
jmap -histo:live 29170 | more
jmap -permstat 29170
jstack -l 29170 | more
References:
JVM性能调优监控工具jps、jstack、jmap、jhat、jstat使用详解: https://blog.csdn.net/wisgood/article/details/25343845
JVM的常用性能监控工具jps、jstat、jinfo、jmap、jhat、jstack: https://blog.csdn.net/u010316188/article/details/80215884
JVM系列五:JVM监测&工具[整理中]: https://www.cnblogs.com/redcreen/archive/2011/05/09/2040977.html
jvm系列五:监测命令(jvisualvm jps jstat jmap jhat jstack jinfo)及dump堆内存快照分析: https://blog.csdn.net/xybelieve1990/article/details/53516437
JVM学习之jstat使用方法: https://www.cnblogs.com/parryyang/p/5772484.html
jstat命令查看jvm的GC情况 (以Linux为例): https://www.cnblogs.com/yjd_hycf_space/p/7755633.html
java进程CPU过高排查: https://www.cnblogs.com/Dhouse/p/7839810.html
https://stackify.com/java-performance-tools-8-types-tools-need-know/
https://stackoverflow.com/questions/97599/static-analysis-tool-recommendation-for-java
3.1 OOM
When an OOM occurs the service usually crashes and the business log contains OutOfMemoryError.
An OOM almost always points to a memory leak, so you need a heap snapshot taken at the time of the OOM. If -XX:+HeapDumpOnOutOfMemoryError is configured, the JVM writes a heap dump to the path given by -XX:HeapDumpPath when the OOM happens. Analyze the dump with MAT to find the root cause of the OOM.
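A minimal sketch of how the two heap-dump flags mentioned above might be added to a service's startup command; the heap sizes, dump path and jar name are placeholder values, not settings from the original article:
# Hypothetical startup line: write a heap dump automatically when an OOM occurs
java -Xms2g -Xmx2g \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/data/dumps/app.hprof \
     -jar app.jar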
MAT usage itself is not covered here; there is plenty of material about it online (http://inter12.iteye.com/blog/1407492).
1. Servers usually have large amounts of RAM, so make sure the server's free disk space is larger than its memory size before dumping.
2. A heap snapshot can also be taken manually with jmap -dump:format=b,file=file_name pid, or triggered with kill -3 pid (which prints a thread dump to stdout).
3.2 Deadlocks
A deadlock happens when two or more threads wait on resources held by each other. The typical symptom is hung threads; in worse cases the thread count explodes and the system raises API-alive alerts. The best way to confirm a deadlock is to analyze the thread stacks taken at the time.
# For a detailed case, see the examples in the jstack documentation.
Commands used: jps -v , jstack -l pid
3.3 Blocked threads and thread-count explosions
jstack -l pid | wc -l
jstack -l pid | grep "BLOCKED" | wc -l
jstack -l pid | grep "Waiting on condition" | wc -l
Blocked threads are usually caused by waiting on I/O, the network, or monitor locks; they can lead to request timeouts, a thread-count explosion, and the system returning 502 errors.
When this happens, focus on the BLOCKED, Waiting on condition, and Waiting on monitor entry states in the jstack output.
If many threads are in "waiting for monitor entry", a single global lock may be blocking a large number of threads. If thread dumps taken over a short interval show the number of "waiting for monitor entry" threads growing with no sign of decreasing, some threads are probably staying in the critical section too long, so more and more new threads cannot enter it.
If many threads are in "waiting on condition", they may be waiting on a third-party resource whose response never arrives, pushing large numbers of threads into the waiting state. If many of these threads are, judging from their stacks, waiting on network reads or writes, that is a sign of a network bottleneck: network congestion is preventing the threads from running.
3.4 Excessively long GC
Reference: http://www.oracle.com/technetwork/cn/articles/java/g1gc-1984535-zhs.html
4. Problems on the server itself
Check: CPU, Memory, IO, Network
Common commands: top/htop, free, iostat/iotop, netstat/ss
Network connections worth watching:
Count TCP connections by state: netstat -an | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}'
Find the IPs with the most connections to a given service port: netstat -na | grep 172.16.70.60:1111 | awk '{print $5}' | cut -d : -f1 | sort | uniq -c | sort -rn | head -10
Reference: https://www.cnblogs.com/mfmdaoyou/p/7349117.html
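Building on the jstack counts above, a small sketch that samples the thread dump several times to check whether the number of blocked/waiting threads keeps growing (the pid and interval are placeholders):
# Take 5 thread dumps, 10 seconds apart, and count the suspicious states each time
pid=29170   # example pid
for i in 1 2 3 4 5; do
  dump=$(jstack -l "$pid")
  blocked=$(echo "$dump" | grep -c "BLOCKED")
  monitor=$(echo "$dump" | grep -c "waiting for monitor entry")
  condition=$(echo "$dump" | grep -c "waiting on condition")
  echo "$(date +%T)  BLOCKED=$blocked  monitor_entry=$monitor  on_condition=$condition"
  sleep 10
done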
OS X Keyboard Shortcut Cheatsheet
Below is a list of the most useful and most frequently used OS X keyboard shortcuts. Learning them will noticeably improve your productivity on a Mac, and they are strongly recommended.
Pick a few, learn them within a day, then come back a few days later and start on the rest. You will have them memorized in no time!
Most common shortcuts
Cut: removes the selected item and copies it to the clipboard.
您需要的74个最佳OS X(Mac OS)应用程序(2018) 你刚买了一台新的Apple Mac Mac OS(OS X)机器,你想知道要安装的顶级Mac OS应用程序是什么吗?或者你可能有一段时间没有苹果Mac,但想知道你错过了什么?好吧,本指南几乎涵盖了在OS X Mac(Mac OS)上需要做的所有事情! 我首先列出了最重要的 - 他们(大多数)是免费的,非常棒,非常有用。 最近更新时间:2018年9月13日 有个建议吗?或者想要添加您的产品?电邮我! 如果您对最佳Mac OS应用程序有任何建议,请与我联系!我没有包含Mac OS附带的任何默认应用程序。 所有这些应用程序实际上都是我使用的。每当我得到一台新机器时,我实际来到这个页面并下载所有内容!我经常浏览我的应用程序,看看我是否经常使用此列表中没有的任何内容并更新它。 请给我发电子邮件给我你的建议 - 但我倾向于坚持使用我已经使用的应用程序! 顺便说一句,截至2018年9月,我从未接受过此列表中包含的任何应用程序的任何付款,而且我没有使用任何会员链接。这里有几个谷歌广告,但这只是为了支付微小的托管和域名费用。 Mac OS X应用程序类别 基本应用程序(4) 常规实用程序(16) 文件系统和磁盘空间相关实用程序(5) 待办事项列表(5) Office套件应用程序(5) 电子邮件客户端(1) 书写/博客工具(3) 照片编辑和操作(6) ) 视频编辑/操作(3) Torrent(2) Web浏览器(3) 即时消息/社交(2) 安全(3) 媒体,娱乐,照片和音乐(2) 开发人员和高级用户(7) 文本编辑器(5) 数据备份(2) 强烈推荐! 每次复制文本时,Flycut都会将其存储在历史记录中。稍后您可以使用Shift-Command-V粘贴它,即使您当前的剪贴板中有不同的内容也是如此。您可以更改首选项中的热键和其他设置。 价格:免费 获取Flycut剪贴板管理器 更好的触控工具 BetterTouchTool是一款出色的功能丰富的应用程序,允许您为Magic Mouse,MacBook Trackpad和Magic Trackpad以及普通鼠标的鼠标手势配置许多手势。它还允许您配置键盘快捷键,普通鼠标按钮和Apple Remote遥控器的操作。除此之外,它还有一个iOS伴侣应用程序(BTT Remote),它也可以配置为以您想要的方式控制您的Mac。 价格:免费 获得更好的触摸工具 只要您连接到Internet,应用程序就可以将他们想要的任何信息发送到他们想要的任何地方。有时他们会根据您的明确要求,出于充分的理由这样做。但他们往往不这样做。Little Snitch拦截这些不需要的连接尝试,让您决定如何继续。 价格:34.95美元(30天免费试用) 得到小飞贼 使用iStat Menus,您可以即时查看有关MacBook,MacMini或其他Mac OS应用程序的统计数据 - 例如硬盘空间,温度,wifi统计数据,世界时间等。 价格:18.00美元 获得国家菜单 EasyFind 想想Mac OS X的Spotlight可以使用一些帮助,特别是在搜索文本文件时?下载EasyFind,Spotlight的替代品(或补充),可在任何文件中查找文件,文件夹或内容,无需编制索引。EasyFind对那些厌倦了缓慢或不可能的索引,过时或损坏的索引,或那些只是寻找Finder或Spotlight中缺少的功能的人特别有用。 价格:免费 获取EasyFind 数据救援4 Data Rescue是一款硬盘恢复软件,可以从崩溃,损坏或未安装的硬盘驱动器,意外重新格式化的硬盘驱动器或重新安装的操作系统,或以前删除,损坏或丢失的文件中恢复照片,视频,文档。 价格:免费(2gb恢复)或49-299美元 获取数据救援4 事情是一个令人愉快和易于使用的任务管理器。您将立即开始,进入并组织您的待办事项。您将发现事物如何真正提高您的工作效率。很快你就会意识到实现目标更自然 - 一次一个待办事项。 价格:49.99美元(在他们的网站上免费试用) 应用商店链接 使用Numbers for Mac,复杂的电子表格只是一个开始。整张纸是你的画布。只需添加戏剧性的交互式图表,表格和图像,即可绘制出数据的显示图片。您可以在Mac和iOS设备之间无缝工作。并且可以毫不费力地与使用Microsoft Excel的人一起工作。针对OS X El Capitan进行了更新,数字现在比以往更加强大。 价格:19.99美元 获取Numbers App Store链接 适用于Mac的Pages是一款功能强大的文字处理程序,可为您提供创建美观文档所需的一切。阅读精美。经过重新设计的OS X El Capitan,它可让您在Mac和iOS设备之间无缝工作。甚至可以毫不费力地与使用Microsoft Word的人一起工作。 价格:19.99美元 App Store链接 Mac的Keynote使创建和提供精美的演示文稿变得简单。针对OS X El Capitan进行了更新,Keynote采用了强大的工具和令人眼花缭乱的效果,将您的想法变为现实。您可以在Mac和iOS设备之间无缝工作。并且可以毫不费力地与使用Microsoft PowerPoint的人一起工作。 价格:19.99美元 获取Keynote App Store链接 免费办公室 打开在大量应用程序(如OpenOffice,Microsoft Office,Microsoft Visio,WordPerfect,Quattro Pro,Lotus 1-2-3,AutoCAD)中创建的文本文档,电子表格,演示文稿和绘图。甚至是在旧的和历史应用程序(如MacWrite和ClarisWorks)中创建的文档 价格:免费 获取Libre Office App Store链接 Microsoft Office for Mac 毫无疑问是专为Mac设计的Office。快速入门,使用新的现代版Word,Excel,PowerPoint,Outlook和OneNote,结合熟悉的Office和您喜爱的独特Mac功能。 价格:149.99美元 获取Microsoft Office for Mac Airmail是从头开始设计的,可以保留与单个或多个帐户相同的体验,并提供快速,现代和易用的用户体验。Airmail很干净,可以让您不间断地收到电子邮件 - 它是21世纪的邮件客户端。 价格:9.99美元 获取Airmail MarsEdit Mac的#1桌面博客编辑器 - 编写,预览和发布博客的最佳方式。通过标准的MetaWeblog和AtomPub接口,可与WordPress,Blogger,Tumblr,TypePad,Movable Type等数十种产品配合使用。使用Mac上的本地草稿离线工作,预览帖子的格式和内容,并在您准备好与世界分享时发布。轻松浏览iPhoto,Aperture或Lightroom库中的照片,并将其嵌入到您的博客帖子中进行自动上传。非常适合专业博主和休闲作家,他们不想惹恼笨重的基于网络的界面。如果你有幸拥有一台Mac,那么没有什么比MarsEdit更强大或更优雅了。 价格:39.99美元 获取MarsEdit App Store链接 你在Tumblr上发表博客吗?然后你应该得到Tumblr应用程序。它允许您从Mac上的几乎任何位置发布到Tumblr。如果窗口有共享按钮,您可以将该窗口中的内容共享给Tumblr。桌面上有一张照片?只需点击几下鼠标即可将其直接放到您的博客上。 它也是一个浏览器,可以访问Tumblr,只能访问Tumblr。 价格:免费 获取Tumblr App 世界上最好的成像和设计工具集现在为您带来更多创意可能性,将您的桌面和移动应用程序与您的所有创意资产相连接,这样您就可以在任何设备或屏幕上制作出色的视觉内容。 价格:每月9.99美元起 获取Photoshop Pixelmator Pixelmator充分利用最新的Mac技术,为您提供快速,强大的工具,让您可以触摸和增强图像,绘制或绘画,应用炫目效果,或创建非常简单的高级乐曲。一旦您的图像准备就绪,可以使用iCloud随时随地访问它们,将它们发送到iPhoto或Aperture,通过电子邮件发送,打印,共享或保存为流行的图像格式 - 所有这些都可以从Pixelmator完成。 价格:29.99美元 获取Pixelmator App Store链接 Gif 
Brewery GIF Brewery是Mac OS X上GIF创建者的最佳视频.GIF Brewery允许您将视频文件中的剪辑转换为GIF。不再需要从电影中提取帧并摆弄Adobe Photoshop。让GIF Brewery为您完成所有艰苦的工作。 价格:4.99美元 获得Gif啤酒厂 iMovie中 iMovie拥有流线型设计和直观的编辑功能,让您可以享受前所未有的视频和故事。浏览您的视频库,分享最喜欢的时刻,并创建可以以高达4K的分辨率编辑的精美电影。您甚至可以在iPhone或iPad上开始编辑电影,并在Mac上完成。当您的电影准备好进行大型首映时,您可以在iMovie Theatre的所有设备上欣赏它。 价格:14.99美元 获取iMovie uTorrent的 一个(非常)微小的BitTorrent客户端 - μTorrent稍微超过1 MB(小于数码照片!)。它安装速度超快,永远不会占用宝贵的系统资源。尽可能快速有效地下载文件,而不会降低其他在线活动的速度。 价格:免费 得到uTorrent 苹果浏览器 Safari比其他浏览器更快,更节能,因此网站响应更快,笔记本电池的充电时间更长。内置隐私功能有助于您浏览业务。便捷的工具可帮助您保存,查找和分享您的收藏夹。Safari可与iCloud配合使用,让您无缝浏览所有设备。 价格:免费 获取Safari 强烈推荐! 现在在2018年,我已经切换(我一直习惯使用Sophos Anti Virus),只使用AVG for Mac。它很快,不会占用资源,一次或两次弹出警报。比抱歉更安全。我还列出了几个替代方案。 获取适用于Mac的AVG Anti Virus Go2Shell 使用Finder并且想要打开当前目录的终端窗口时,这是一个非常方便的工具。安装此应用程序,将其拖动到查找器窗口(应用程序页面上的说明),并且在查找程序窗口中始终有一个按钮,用于单击并打开该目录中的终端窗口 价格:免费 获取Go2Shell Xcode中 这是用于构建在Apple TV,Apple Watch,iPhone,iPad和Mac上运行的应用程序的完整Xcode开发人员工具集。它包括Xcode IDE,模拟器以及为iOS,watchOS,tvOS和OS X构建应用程序所需的所有工具和框架。 价格:免费 获取Xcode App Store链接 的TextWrangler Text Wrangler是一个通用的文本编辑器,用于轻型合成,数据文件编辑(数据文件由普通的[无样式]文本组成),以及文本导向数据的操作。TextWrangler支持使用纯文本和Unicode文件(除了使用从右到左书写系统编写的文件,例如希伯来语或阿拉伯语)。 价格:免费 获取TextWrangler 的BBEdit BBEdit用于Macintosh的专业HTML和文本编辑器。这款屡获殊荣的产品专为满足Web作者和软件开发人员的需求而精心打造,为编辑,搜索和操作文本提供了丰富的高性能功能。智能界面可轻松访问BBEdit的一流功能,包括grep模式匹配,跨多个文件的搜索和替换,项目定义工具,多种源代码语言的函数导航和语法着色,代码折叠,FTP和SFTP打开和保存,AppleScript,Mac OS X Unix脚本支持,文本和代码完成,当然还有一整套强大的HTML标记工具。 价格:49.99美元 获取BBEdit Coda是您在一个漂亮的应用程序中手动编码网站所需的一切。一个快速,干净,功能强大的文本编辑器。像素完美的预览。一种打开和管理本地和远程文件的内置方法。也许是一点点SSH。跟你说,Coda。 价格:99美元 得到Coda Atom是一个文本编辑器,它是现代的,平易近人的,但却可以对核心进行攻击 - 一个可以自定义的工具,可以在不触及配置文件的情况下高效地使用。它适用于OS X,Windows和Linux,它具有许多很酷的功能,如内置包管理器,智能自动完成,主题支持和完全可定制。 价格:免费 (这是一个联盟链接 - 但我真的把它们当作客户使用并且认为它们很棒。去Google的网站以避免联盟链接,但它不会花费你任何东西,它有助于支付运行这个的账单现场。) 价格:每台5美元/月或50美元/年,无限制数据备份 获得Backblaze
# Script ffmpeg compile for Centos 7.x # Alvaro Bustos, thanks to Hunter. # Updated 5-8-2018 # URL base https://trac.ffmpeg.org/wiki/CompilationGuide/Centos # Install libraries yum install -y autoconf automake bzip2 cmake freetype-devel gcc gcc-c++ git libtool make mercurial pkgconfig zlib-devel x264-devel x265-devel # Install yasm from repos # Create a temporary directory for sources. SOURCES=$(mkdir ~/ffmpeg_sources) cd ~/ffmpeg_sources # Download the necessary sources. curl -O http://www.tortall.net/projects/yasm/releases/yasm-1.3.0.tar.gz wget http://www.nasm.us/pub/nasm/releasebuilds/2.13.02/nasm-2.13.02.tar.bz2 # git clone --depth 1 http://git.videolan.org/git/x264 wget ftp://ftp.videolan.org/pub/videolan/x264/snapshots/x264-snapshot-20180720-2245.tar.bz2 wget https://bitbucket.org/multicoreware/x265/downloads/x265_2.8.tar.gz git clone --depth 1 https://github.com/mstorsjo/fdk-aac curl -O -L http://downloads.sourceforge.net/project/lame/lame/3.100/lame-3.100.tar.gz wget http://www.mirrorservice.org/sites/distfiles.macports.org/libopus/opus-1.2.1.tar.gz wget https://ftp.osuosl.org/pub/xiph/releases/ogg/libogg-1.3.3.tar.gz wget http://ftp.osuosl.org/pub/xiph/releases/vorbis/libvorbis-1.3.6.tar.gz curl -O -L https://ftp.osuosl.org/pub/xiph/releases/theora/libtheora-1.1.1.tar.gz git clone --depth 1 https://chromium.googlesource.com/webm/libvpx.git wget http://ffmpeg.org/releases/ffmpeg-4.0.tar.gz # Unpack files for file in `ls ~/ffmpeg_sources/*.tar.*`; do tar -xvf $file cd nasm-*/ ./autogen.sh ./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin" make install cd .. cp /root/bin/nasm /usr/bin cd yasm-*/ ./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin" && make && make install; cd .. cp /root/bin/yasm /usr/bin cd x264-*/ PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin" --enable-static && make && make install; cd .. cd /root/ffmpeg_sources/x265_2.8/build/linux cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX="$HOME/ffmpeg_build" -DENABLE_SHARED:bool=off ../../source && make && make install; cd ~/ffmpeg_sources cd fdk-aac autoreconf -fiv && ./configure --prefix="$HOME/ffmpeg_build" --disable-shared && make && make install; cd .. cd lame-*/ ./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin" --disable-shared --enable-nasm && make && make install; cd .. cd opus-*/ ./configure --prefix="$HOME/ffmpeg_build" --disable-shared && make && make install; cd .. cd libogg-*/ ./configure --prefix="$HOME/ffmpeg_build" --disable-shared && make && make install; cd .. cd libvorbis-*/ ./configure --prefix="$HOME/ffmpeg_build" --with-ogg="$HOME/ffmpeg_build" --disable-shared && make && make install; cd .. cd libtheora-*/ ./configure --prefix="$HOME/ffmpeg_build" --with-ogg="$HOME/ffmpeg_build" --disable-shared && make && make install; cd .. cd libvpx ./configure --prefix="$HOME/ffmpeg_build" --disable-examples --disable-unit-tests --enable-vp9-highbitdepth --as=yasm && make && make install; cd .. 
cd ffmpeg-*/ PATH="$HOME/bin:$PATH" PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure --prefix="$HOME/ffmpeg_build" --pkg-config-flags="--static" --extra-cflags="-I$HOME/ffmpeg_build/include" --extra-ldflags="-L$HOME/ffmpeg_build/lib" --extra-libs=-lpthread --extra-libs=-lm --bindir="$HOME/bin" --enable-gpl --enable-libfdk_aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libtheora --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree && make && make install && hash -r; cd .. cd ~/bin cp ffmpeg ffprobe lame x264 /usr/local/bin cd /root/ffmpeg_build/bin cp x265 /usr/local/bin echo "FFmpeg Compilation is Finished!"
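After the script prints its final message, a quick hedged smoke test of the freshly built binary (the input and output file names are examples only):
# Verify the build and run a simple H.264 transcode
ffmpeg -version
ffmpeg -i input.mp4 -c:v libx264 -preset fast -crf 23 -c:a aac output.mp4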
使用KVM(基于内核的虚拟机)+ QEMU的虚拟化。 需要具有Intel VT或AMD-V功能的CPU。 安装KVM [root@kvm-centos7 ~]# yum -y install qemu-kvm libvirt virt-install bridge-utils # 确保模块已加载 [root@kvm-centos7 ~]# lsmod | grep kvm kvm_intel 170181 0 kvm 554609 1 kvm_intel irqbypass 13503 1 kvm [root@kvm-centos7~]# systemctl start libvirtd [root@kvm-centos7~]# systemctl enable libvirtd 为KVM虚拟机配置桥接网络 参考:http://blog.csdn.net/wh211212/article/details/54135565 实验环境:OS:CentOS Linux release 7.3.1611 (Core)Network:双网卡bonding硬件:DELL R420,16G 1CPU 4核 # 网卡配置,新建ifcfg-bro,然后修改相关配置如下: [root@kvm-centos7 ~]# cd /etc/sysconfig/network-scripts/ [root@kvm-centos7 network-scripts]# cat ifcfg-br0 DEVICE="br0" ONBOOT="yes" TYPE="Bridge" BOOTPROTO=static IPADDR=192.168.1.133 # 自定义 NETMASK=255.255.255.0 GATEWAY=192.168.1.1 DEFROUTE=yes # ifcfg-bond0配置文件修改 [root@kvm-centos7 network-scripts]# cat ifcfg-bond0 DEVICE=bond0 TYPE=Ethernet NAME=bond0 BONDING_MASTER=yes BOOTPROTO=none BRIDGE=br0 ONBOOT=yes BONDING_OPTS="mode=5 miimon=100" 桥接网络配置完成重启网络服务,查看ifconfig如下: [root@kvm-centos7 network-scripts]# systemctl restart network 查看ifconfig,看网络服务是否正常 创建虚拟机 安装GuestOS并创建虚拟机。此示例显示安装CentOS 7 通过网络在文本模式上安装GuestOS,虚拟机的映像默认放置在/var/lib/libvirt/images作为存储池,但本示例显示创建和使用新的存储池。 [root@kvm-centos7~]# mkdir -p /var/kvm/images # 创建新的存储池 [root@kvm-centos7 ~]# virt-install \ --name elk \ --ram 4096 \ --disk path=/var/kvm/images/elk.img,size=30 \ --vcpus 2 \ --os-type linux \ --os-variant rhel7 \ --network bridge=br0 \ --graphics none \ --console pty,target_type=serial \ --location 'http://mirrors.aliyun.com/centos/7/os/x86_64/' \ --extra-args 'console=ttyS0,115200n8 serial' 正常加载状态如下: 上面指定的相关参数含义如下:更多参考man virt-install --name 指定虚拟机的名称 --ram 指定Virtual Machine --disk的内存量path = xxx,size = xxx 'path ='⇒指定虚拟机 size ='⇒指定虚拟机的磁盘数量 --vcpus 指定虚拟CPU --os-type 指定GuestOS 的类型 --os-variant 指定GuestOS的类型 - 可能确认列表中使用以下命令osinfo-query os --network 指定虚拟机的网络类型 --graphics 指定图形的类型。如果设置为“无”,则意味着非图形。 --console 指定控制台类型 --location 指定安装的位置,其中from --extra-args 指定在内核中设置的参数 在文本模式下安装,与常见的安装步骤相同。安装完成后,首先重新启动,然后登录提示如下所示。 重新安装kvm虚拟机,记录安装步骤 virt-install -d --virt-type=kvm --name=aniu-saas-1 --vcpus=8 --memory=12288 --location=/media/CentOS-7-x86_64-Minimal-1611.iso --disk path=/dev/cl/aniu-saas-1 --network bridge=br0 --graphics none --extra-args='console=ttyS0' --force 注:命令行安装操作比较麻烦,注意多看提示。 下面附上笔者网卡配置信息 [root@aniu-saas network-scripts]# cat ifcfg-br0 DEVICE="br0" TYPE="Bridge" BOOTPROTO="none" DEFROUTE="yes" NAME="br0" ONBOOT="yes" IPADDR="192.168.0.205" PREFIX="24" GATEWAY="192.168.0.1" DNS1="114.114.114.114" [root@aniu-saas network-scripts]# cat ifcfg-em1 TYPE="Ethernet" NAME="em1" UUID="999a275e-eac8-4323-bdf8-f7c7434b7737" DEVICE="em1" ONBOOT="yes" BRIDGE="br0" location参数笔者建议换成http或者nfs的加载系统镜像。 安装成功界面如下图: 安装完成后,由于安装的时候没有配置网络,发现虚拟机也没有自动分配网络,就添加了虚拟机网络,参考如下: [root@localhost network-scripts]# cat ifcfg-eth0 TYPE=Ethernet BOOTPROTO=static DEFROUTE=yes PEERDNS=yes PEERROUTES=yes IPV4_FAILURE_FATAL=no NAME=eth0 UUID=a38ceceb-5f4e-4d08-a108-d83c176ea85b DEVICE=eth0 ONBOOT=yes IPADDR="192.168.0.206" PREFIX="24" GATEWAY="192.168.0.1" DNS1="114.114.114.114"
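Once installation finishes, the guest can be managed with the usual virsh subcommands; a minimal sketch assuming the VM name elk from the virt-install example above:
virsh list --all          # show all defined VMs and their state
virsh console elk         # attach to the serial console (exit with Ctrl+])
virsh autostart elk       # start the VM automatically when the host boots
virsh shutdown elk        # graceful shutdown; virsh destroy elk forces power-off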
Using virsh directly on an oVirt node to manage VMs requires a username and password.
[root@ovirt4 ~]# virsh list --all
Please enter your authentication name: vdsm@ovirt
Please enter your password:
 Id    Name                           State
----------------------------------------------------
 3     vm-03                          running
 5     vm-04                          running
Finding the username (vdsm@ovirt):
[root@ovirt3 ~]# find / -name libvirtconnection.py
/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py
[root@ovirt3 ~]# egrep vdsm@ovirt /usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py
SASL_USERNAME = "vdsm@ovirt"
Finding the password (shibboleth):
[root@ovirt3 ~]# find / -name libvirt_password
/etc/pki/vdsm/keys/libvirt_password
[root@ovirt3 ~]# cat /etc/pki/vdsm/keys/libvirt_password
shibboleth  # the password
Note: creating an admin user with `# saslpasswd2 -a libvirt admin` to manage VMs is not recommended, although it can be used.
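To avoid typing the SASL credentials on every virsh call, a libvirt client auth file can be used; the sketch below assumes the standard libvirt auth.conf mechanism and reuses the username and password recovered above:
# /etc/libvirt/auth.conf (or ~/.config/libvirt/auth.conf) -- assumption: default libvirt client auth file locations
[credentials-vdsm]
authname=vdsm@ovirt
password=shibboleth

[auth-libvirt-localhost]
credentials=vdsm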
安装依赖,CentOS7最小化安装 yum -y install epel-release && yum update -y sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config yum install vim net-tools -y 为了方便设置三台redis节点ssh免密,同步hosts如下: # redis1为例: # cat /etc/hosts 127.0.0.1 localhost localhost.localdomain # redis cluster 192.168.0.117 redis1 ecs-117 192.168.0.118 redis2 ecs-118 192.168.0.119 redis3 ecs-119 redis1编译安装redis4.0.9 yum groupinstall "Development Tools" -y && yum install tcl wget -y wget http://download.redis.io/releases/redis-4.0.9.tar.gz tar zxvf redis-4.0.9.tar.gz -C /data/ ln -s /data/redis-4.0.9/ /data/redis cd /data/redis && make # 创建redis常用命令目录 mkdir /data/redis/bin cp /data/redis/src/{redis-benchmark,redis-check-aof,redis-check-rdb,redis-cli,redis-sentinel,redis-server,redis-shutdown,redis-trib.rb} /data/redis/bin/ 笔者redis cluster目录结构,以redis1为例: [root@redis1 data]# tree -d /data -L 1 /data ├── redis -> /data/redis-4.0.9/ ├── redis-4.0.9 └── redis-cluster # tree -d /data/redis-cluster/ /data/redis-cluster/ ├── conf ├── nodes # 存放redis 集群节点的配置文件 └── scripts # redis 集群维护脚本 # conf存放redis实例配置文件:7000.conf、7003.conf 脚本具体内容 [root@redis1 redis-cluster]# tree scripts/ scripts/ ├── bgrewriteaof.sh ├── redis-7000.sh └── redis-7003.sh 0 directories, 3 files [root@redis1 redis-cluster]# cat scripts/bgrewriteaof.sh #!/bin/bash ############################################# # Functions: start & stop redis cluster node # ChangeLog: # 2018-05-30 shaonbean@qq.com initial ############################################ # set -x #DEBUG REDIS_BIN_DIR="/data/redis/bin" REDIS_CLUSTER_DIR="/data/redis-cluster" REDIS_CLI="$REDIS_BIN_DIR/redis-cli" REDIS_SERVER="$REDIS_BIN_DIR/redis-server" PORTLIST=(7000 7003) bgrewriteaof() { for REDIS_PORT in ${PORTLIST[@]}; REDIS_NAME="redis-$REDIS_PORT" REDIS_CONFIG="/data/redis-cluster/conf/$REDIS_PORT.conf" REDIS_PASS="Aniuredis123" echo -n $"Rewriteaof $REDIS_NAME: " $REDIS_CLI -p $REDIS_PORT -a $REDIS_PASS BGREWRITEAOF retval=$? [ $retval -eq 0 ] && echo " $REDIS_NAME bgrewriteaof succeed!" bgrewriteaof # 定时进行aof重写 [root@redis1 redis-cluster]# cat scripts/redis-7000.sh # redis实例启动脚本 #!/bin/bash ############################################# # Functions: start & stop redis cluster node # ChangeLog: # 2018-05-30 shaonbean@qq.com initial ############################################ # set -x #DEBUG REDIS_BIN_DIR="/data/redis/bin" REDIS_CLUSTER_DIR="/data/redis-cluster" REDIS_CLI="$REDIS_BIN_DIR/redis-cli" REDIS_SERVER="$REDIS_BIN_DIR/redis-server" REDIS_USER="root" REDIS_PORT=7000 REDIS_NAME="redis-$REDIS_PORT" REDIS_CONFIG="/data/redis-cluster/conf/$REDIS_PORT.conf" REDIS_PASS="Aniuredis123" start() { [ -f $REDIS_CONFIG ] || exit 6 [ -x $REDIS_SERVER ] || exit 5 echo -n $"Starting $REDIS_NAME: " $REDIS_SERVER $REDIS_CONFIG retval=$? [ $retval -eq 0 ] && echo " $REDIS_NAME start succeed!" stop() { echo -n $"Stopping $REDIS_NAME: " $REDIS_CLI -p $REDIS_PORT -a $REDIS_PASS shutdown retval=$? [ $retval -eq 0 ] && echo " $REDIS_NAME stop succeed!" restart() { start case "$1" in start) stop) restart) echo $"Usage: $0 {start|stop|restart}" exit 2 exit $? 
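Assuming the instance scripts above are made executable, each Redis instance on the node would be started and stopped roughly like this (a sketch, not output captured from the original setup):
chmod +x /data/redis-cluster/scripts/redis-7000.sh /data/redis-cluster/scripts/redis-7003.sh
/data/redis-cluster/scripts/redis-7000.sh start     # start the 7000 instance
/data/redis-cluster/scripts/redis-7003.sh start     # start the 7003 instance
/data/redis-cluster/scripts/redis-7000.sh restart   # stop / restart work the same way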
7000.conf [root@redis1 conf]# cat 7000.conf bind 0.0.0.0 protected-mode yes port 7000 tcp-backlog 511 timeout 0 tcp-keepalive 300 daemonize yes supervised no pidfile /var/run/redis/redis_7000.pid loglevel notice logfile /var/log/redis/redis-7000.log databases 16 always-show-logo yes save "" stop-writes-on-bgsave-error yes rdbcompression yes rdbchecksum yes dbfilename dump_7000.rdb dir /var/lib/redis slave-serve-stale-data yes slave-read-only yes repl-diskless-sync no repl-diskless-sync-delay 5 repl-disable-tcp-nodelay no slave-priority 100 masterauth Aniuredis123 requirepass Aniuredis123 rename-command FLUSHALL "" #rename-command CONFIG "" maxclients 10000 maxmemory 4gb maxmemory-policy allkeys-lru maxmemory-samples 5 lazyfree-lazy-eviction no lazyfree-lazy-expire no lazyfree-lazy-server-del no slave-lazy-flush no appendonly yes appendfilename "appendonly_7000.aof" appendfsync everysec no-appendfsync-on-rewrite no auto-aof-rewrite-percentage 100 auto-aof-rewrite-min-size 64mb aof-load-truncated yes aof-use-rdb-preamble no lua-time-limit 5000 # cluster config cluster-enabled yes cluster-config-file /data/redis-cluster/nodes/nodes-7000.conf cluster-node-timeout 5000 cluster-slave-validity-factor 10 cluster-migration-barrier 1 cluster-require-full-coverage yes cluster-slave-no-failover no slowlog-log-slower-than 10000 slowlog-max-len 128 latency-monitor-threshold 0 notify-keyspace-events "" hash-max-ziplist-entries 512 hash-max-ziplist-value 64 list-max-ziplist-size -2 list-compress-depth 0 set-max-intset-entries 512 zset-max-ziplist-entries 128 zset-max-ziplist-value 64 hll-sparse-max-bytes 3000 activerehashing yes client-output-buffer-limit normal 0 0 0 client-output-buffer-limit slave 256mb 64mb 60 client-output-buffer-limit pubsub 32mb 8mb 60 hz 10 aof-rewrite-incremental-fsync yes 安装ruby环境 yum --enablerepo=centos-sclo-rh -y install rh-ruby24 scl enable rh-ruby24 bash # 启用ruby环境变量 # 写入到环境变量 vi /etc/profile.d/rh-ruby24.sh #!/bin/bash source /opt/rh/rh-ruby24/enable export X_SCLS="`scl enable rh-ruby24 'echo $X_SCLS'`" 创建集群安装依赖 # gem install redis Successfully installed redis-4.0.1 Parsing documentation for redis-4.0.1 Done installing documentation for redis after 2 seconds 1 gem installed 以redis1为例: [root@redis1 ~]# /data/redis/bin/redis-trib.rb create --replicas 1 192.168.0.117:7000 192.168.0.118:7001 192.168.0.119:7002 192.168.0.117:7003 192.168.0.118:7004 192.168.0.119:7005 >>> Creating cluster >>> Performing hash slots allocation on 6 nodes... Using 3 masters: 192.168.0.117:7000 192.168.0.118:7001 192.168.0.119:7002 Adding replica 192.168.0.118:7004 to 192.168.0.117:7000 Adding replica 192.168.0.119:7005 to 192.168.0.118:7001 Adding replica 192.168.0.117:7003 to 192.168.0.119:7002 M: 1ea6b06582a23a1c73d06bc1aa32c3f4d8edcb24 192.168.0.117:7000 slots:0-5460 (5461 slots) master M: 2da83fad766d22198f064b31524d7e757af63a04 192.168.0.118:7001 slots:5461-10922 (5462 slots) master M: 77aa1ba8353e1dc303c1f3f2a553f82777aad5d6 192.168.0.119:7002 slots:10923-16383 (5461 slots) master S: 76feb9dfb7963a740e697bc9ecbdf4d18d1cdbb4 192.168.0.117:7003 replicates 77aa1ba8353e1dc303c1f3f2a553f82777aad5d6 S: 769eebd4bde8bbc58543585021bc34059d50ba23 192.168.0.118:7004 replicates 1ea6b06582a23a1c73d06bc1aa32c3f4d8edcb24 S: 6f2b369304597814af228b32c9c384581a34b900 192.168.0.119:7005 replicates 2da83fad766d22198f064b31524d7e757af63a04 Can I set the above configuration? 
(type 'yes' to accept): yes >>> Nodes configuration updated >>> Assign a different config epoch to each node >>> Sending CLUSTER MEET messages to join the cluster Waiting for the cluster to join... >>> Performing Cluster Check (using node 192.168.0.117:7000) M: 1ea6b06582a23a1c73d06bc1aa32c3f4d8edcb24 192.168.0.117:7000 slots:0-5460 (5461 slots) master 1 additional replica(s) S: 76feb9dfb7963a740e697bc9ecbdf4d18d1cdbb4 192.168.0.117:7003 slots: (0 slots) slave replicates 77aa1ba8353e1dc303c1f3f2a553f82777aad5d6 M: 2da83fad766d22198f064b31524d7e757af63a04 192.168.0.118:7001 slots:5461-10922 (5462 slots) master 1 additional replica(s) M: 77aa1ba8353e1dc303c1f3f2a553f82777aad5d6 192.168.0.119:7002 slots:10923-16383 (5461 slots) master 1 additional replica(s) S: 769eebd4bde8bbc58543585021bc34059d50ba23 192.168.0.118:7004 slots: (0 slots) slave replicates 1ea6b06582a23a1c73d06bc1aa32c3f4d8edcb24 S: 6f2b369304597814af228b32c9c384581a34b900 192.168.0.119:7005 slots: (0 slots) slave replicates 2da83fad766d22198f064b31524d7e757af63a04 [OK] All nodes agree about slots configuration. >>> Check for open slots... >>> Check slots coverage... [OK] All 16384 slots covered. redis集群使用密码设置 配置文件写入: masterauth Aniuredis123 requirepass Aniuredis123 2、设置密码之后如果需要使用redis-trib.rb的各种命令,报错解决: # 解决办法: [root@redis1 redis]# find / -name client.rb /opt/rh/rh-ruby24/root/usr/local/share/gems/gems/redis-4.0.1/lib/redis/client.rb class Client DEFAULTS = { :url => lambda { ENV["REDIS_URL"] }, :scheme => "redis", :host => "127.0.0.1", :port => 6379, :path => nil, :timeout => 5.0, :password => "passwd123", :db => 0, :driver => nil, :id => nil, :tcp_keepalive => 0, :reconnect_attempts => 1, :inherit_socket => false 注意:client.rb路径可以通过find命令查找:find / -name 'client.rb' 笔者redis 集群安装总结 # redis集群说明 - redis1是Redis集群的一个节点A,上面运行两个redis实例,7000 7003 - redis2是Redis集群的一个节点B,上面运行两个redis实例,7001 7004 - redis3是Redis集群的一个节点C,上面运行两个redis实例,7002 7005 - 假设集群包含A、B、C、A1、B1、C1六个节点 A、B、C为主节点对应Redis实例:7000 7001 7002 A1、B1、C1为从节点对应redis实例:7003 7004 7005 # 建议交叉设置主从节点,对应关系为 A > B1 B > C1 C > A1 # 创建集群 cd /data/redis/bin ./redis-trib.rb create --replicas 1 192.168.0.117:7000 192.168.0.118:7001 192.168.0.119:7002 192.168.0.117:7003 192.168.0.118:7004 192.168.0.119:7005 # redis集群性能测试 time /usr/local/redis/bin/redis-benchmark -h redis1 -p 7000 -a Aniuredis123 -c 200 -r 1000000 -n 2000000 -t get,set,lpush,lpop -P 16 -q time /usr/local/redis/bin/redis-benchmark -h redis2 -p 7001 -a Aniuredis123 -c 200 -r 1000000 -n 2000000 -t get,set,lpush,lpop -P 16 -q time /usr/local/redis/bin/redis-benchmark -h redis3 -p 7002 -a Aniuredis123 -c 200 -r 1000000 -n 2000000 -t get,set,lpush,lpop -P 16 -q # redis故障模拟 # 参考redis.service设置redis实例开机自启动 echo never > /sys/kernel/mm/transparent_hugepage/enabled echo 511 > /proc/sys/net/core/somaxconn 建议参考笔者的目录结构去部署,快下班啦,后面有需要注意的问题在总结 redis 集群结合F5提供服务开源容器应用程序平台,Origin是支持OpenShift的上游社区项目。围绕Docker容器打包和Kubernetes容器集群管理的核心构建,Origin还增加了应用程序生命周期管理功能和DevOps工具。Origin提供了一个完整的开源容器应用程序平台。 安装OpenShift Origin,它是Red Hat OpenShift的开源版本 环境CentOS7: Hostname master.aniu.so Master, etcd, and node 192.168.0.111 node1.aniu.so Computer Node 192.168.0.114 node2.aniu.so Computer Node 192.168.0.115 安装依赖包,关闭防火墙,禁用selinux,添加sysctl net.ipv4.ip_forward=1,所有节点均执行 详情见:https://github.com/openshift/origin/blob/master/docs/cluster_up_down.md#prerequisites参考:https://docs.openshift.org/latest/install_config/install/host_preparation.html#install-config-install-host-preparation yum install wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion 
kexec-tools sos psacct -y yum install ansible pyOpenSSL -y 在所有节点上,安装OpenShift Origin 3.9存储库和Docker。 接下来,为Docker Direct LVM创建一个卷组,以便像下面那样设置LVM Thinpool。 # 以master为例,虽有节点都必须执行 [root@master ~]# yum -y install centos-release-openshift-origin37 docker [root@master ~]# vgcreate centos /dev/sdb1 #笔者vgname为centos Volume group "centos" successfully created [root@master ~]# echo VG=centos >> /etc/sysconfig/docker-storage-setup [root@master ~]# systemctl start docker [root@master ~]# systemctl enable docker 设置三个节点master节点与其他节点ssh免密。方便执行ansible-playbook # 在master节点上执行: [root@master ~]# ssh-keygen -t rsa # 创建免密设置 [root@master ~]# cat /etc/hosts # 参考笔者hosts,同步hosts到node1,node2 127.0.0.1 localhost localhost.localdomain pinpoint ########################################### ## openshift 192.168.0.113 master.aniu.so 192.168.0.114 node1.aniu.so 192.168.0.115 node2.aniu.so [root@master ~]# ssh-copy-id master.aniu.so /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub" /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system. (if you think this is a mistake, you may want to use -f option) [root@master ~]# ssh-copy-id node1.aniu.so /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub" /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system. (if you think this is a mistake, you may want to use -f option) [root@master ~]# ssh-copy-id node2.aniu.so /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub" /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system. 
(if you think this is a mistake, you may want to use -f option) 在主节点上,使用root登录并运行Ansible Playbook以设置OpenShift群集。 参考:https://docs.openshift.org/latest/install_config/install/stand_alone_registry.html yum -y install atomic-openshift-utils 配置ansible的hosts如下: [root@master ~]# cat /etc/ansible/hosts # add follows to the end [OSEv3:children] masters nodes [OSEv3:vars] # admin user created in previous section ansible_ssh_user=root openshift_deployment_type=origin # use HTPasswd for authentication openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/.htpasswd'}] openshift_master_default_subdomain=apps.test.aniu.so # allow unencrypted connection within cluster openshift_docker_insecure_registries=172.30.0.0/16 [masters] master.aniu.so openshift_schedulable=true containerized=false [etcd] master.aniu.so [nodes] # set labels [region: ***, zone: ***](any name you like) master.aniu.so openshift_node_labels="{'region': 'infra', 'zone': 'default'}" node1.aniu.so openshift_node_labels="{'region': 'primary', 'zone': 'east'}" openshift_schedulable=true node2.aniu.so openshift_node_labels="{'region': 'primary', 'zone': 'west'}" openshift_schedulable=true # 运行deploy_cluster.yml手册以启动安装: ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml # 此步骤会很慢,笔者大概花了2个多小时。。。 正常执行完成查看状态 - 可能会出现的报错: Error from server (Forbidden): nodes is forbidden: User "system:anonymous" cannot list nodes at the cluster scope: User "system:anonymous" cannot list all nodes in the cluster 报错解决: [root@master ~]# oc login -u system:admin # 使用admin登录进行查看 Logged into "https://master.aniu.so:8443" as "system:admin" using existing credentials. You have access to the following projects and can switch between them with 'oc project <projectname>': * default kube-public kube-system logging management-infra openshift openshift-infra openshift-node openshift-web-console Using project "default". [root@master ~]# oc get nodes NAME STATUS ROLES AGE VERSION master.aniu.so Ready master 4h v1.9.1+a0ce1bc657 node1.aniu.so Ready <none> 4h v1.9.1+a0ce1bc657 node2.aniu.so Ready <none> 4h v1.9.1+a0ce1bc657 [root@master ~]# oc get nodes --show-labels=true NAME STATUS ROLES AGE VERSION LABELS master.aniu.so Ready master 4h v1.9.1+a0ce1bc657 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master.aniu.so,node-role.kubernetes.io/master=true,region=infra,zone=default node1.aniu.so Ready <none> 4h v1.9.1+a0ce1bc657 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node1.aniu.so,region=primary,zone=east node2.aniu.so Ready <none> 4h v1.9.1+a0ce1bc657 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node2.aniu.so,region=primary,zone=west 创建一个新用户用来登录openshift 在master节点上创建用户: [root@master ~]# htpasswd /etc/origin/master/.htpasswd aniu # 用任何操作系统用户登录,然后登录OpenShift群集,只需添加一个HTPasswd用户 [root@master ~]# oc login Authentication required for https://master.aniu.so:8443 (openshift) Username: aniu Password: Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname> [root@master ~]# oc whoami [root@master ~]# oc logout Logged "aniu" out on "https://master.aniu.so:8443" 可以从任何使用Web浏览器的客户端访问管理控制台。
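With an HTPasswd user in place, a hedged example of driving a first deployment from the CLI; the project name and sample repository are illustrative only, any buildable source repository works:
oc login -u aniu https://master.aniu.so:8443
oc new-project demo                                   # hypothetical project name
oc new-app https://github.com/sclorg/httpd-ex.git     # example source repository
oc status                                             # inspect the build and deployment objects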
CentOS7 安装pinpoint(开源APM工具pinpoint安装与使用) 参考教程:http://naver.github.io/pinpoint/ Pinpoint是用Java编写的大型分布式系统的APM(应用程序性能管理)工具。 受Dapper的启发,Pinpoint提供了一种解决方案,通过在分布式应用程序中跟踪事务来帮助分析系统的整体结构以及它们中的组件之间的相互关系。 Pinpoint-Collector:收集各种性能数据Pinpoint-Agent:和自己运行的应用关联起来的探针Pinpoint-Web:将收集到的数据显示成WEB网页形式HBase Storage:收集到的数据存到HBase中 https://github.com/naver/pinpoint/releases/ # 直接下载当前最新的war 快速安装参考(http://naver.github.io/pinpoint/quickstart.html) 安裝JDK yum -y install java-1.* # 包含jdk1.6,1.7,1.8,笔者用到的1.9是下载的rpm包,手动安装的,安装完成配置JAVA_HOME,如下: jdk9:http://www.oracle.com/technetwork/java/javase/downloads/java-archive-javase9-3934878.html 编辑/etc/profile,最后添加 # java8,笔者默认使用java8作为默认jdk export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk export JAVA_8_HOME=/usr/lib/jvm/java-1.8.0-openjdk export PATH=$PATH:$JAVA_HOME/bin export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar # java 6 export JAVA_6_HOME=/usr/lib/jvm/java-1.6.0-openjdk.x86_64 # java 7 export JAVA_7_HOME=/usr/lib/jvm/java-1.7.0-openjdk # java 9 export JAVA_9_HOME=/usr/java/jdk-9.0.4 # 上面都需要设置,不然本地build的时候报错 下载最新代码: git clone https://github.com/naver/pinpoint.git ./mvnw install -Dmaven.test.skip=true # 自己build会非常慢,而且会有些报错,建议下载war 安装并启动HBase # 修改如下: VERSION=2.0.0 HBASE_VERSION=hbase-$VERSION HBASE_FILE=$HBASE_VERSION-bin.tar.gz HBASE_DL_URL=http://mirror.bit.edu.cn/apache/hbase/$VERSION/$HBASE_FILE HBASE_ARCHIVE_DL_URL=http://mirror.bit.edu.cn/apache/hbase/$VERSION/$HBASE_FILE Download & Start - Run quickstart/bin/start-hbase.sh # 注意,需要修改hbase的下载地址 Initialize Tables - Run quickstart/bin/init-hbase.sh 启动pinpoint服务 Collector - Run quickstart/bin/start-collector.sh TestApp - Run quickstart/bin/start-testapp.sh Web UI - Run quickstart/bin/start-web.sh 启动脚本完成后,Tomcat日志的最后10行将被拖尾到控制台: Collector TestApp Web UI 检查运行状态 一旦HBase和3个守护进程运行,可以访问以下地址来测试自己的Pinpoint实例。 Web UI - http://localhost:28080 TestApp - http://localhost:28081 可以使用TestApp UI将跟踪数据提供给Pinpoint,并使用Pinpoint Web UI检查它们。 TestApp将自己注册为TESTAPP下的测试代理 Web UI - Run quickstart/bin/stop-web.sh TestApp - Run quickstart/bin/stop-testapp.sh Collector - Run quickstart/bin/stop-collector.sh HBase - Run quickstart/bin/stop-hbase.sh # 注意执行quickstart目录下名称时,要移动到pinpoint家目录,笔者目录:/opt/pinpoint,笔者克隆pinpoint代码是在opt进行的 按步骤安装 为了构建Pinpoint,必须满足以下要求: https://naver.github.io/pinpoint/installation.html 1、安装Hbase http://hbase.apache.org/ 2、安装Java环境 3、安装Pinpoint Collector 4、安装Pinpoint Web 5、安装Pinpoint Agent 支持的模块 JDK 6+ Tomcat 6/7/8, Jetty 8/9, JBoss EAP 6, Resin 4, Websphere 6/7/8, Vertx 3.3/3.4/3.5 Spring, Spring Boot (Embedded Tomcat, Jetty) Apache HTTP Client 3.x/4.x, JDK HttpConnector, GoogleHttpClient, OkHttpClient, NingAsyncHttpClient Thrift Client, Thrift Service, DUBBO PROVIDER, DUBBO CONSUMER ActiveMQ, RabbitMQ MySQL, Oracle, MSSQL, CUBRID,POSTGRESQL, MARIA Arcus, Memcached, Redis, CASSANDRA iBATIS, MyBatis DBCP, DBCP2, HIKARICP gson, Jackson, Json Lib log4j, Logback 参考链接:http://naver.github.io/pinpoint/quickstart.html https://blog.csdn.net/neven7/article/details/51043307 agent配置(tomcat) # 把打包生成pinpoint-agent-1.8.0-SNAPSHOT.zip,拷贝到对应的agent服务器上,解压到/opt/pinpoint-agent # 修改tomcat的启动参数,编辑catalina.sh,添加如下: AGENT_PATH=/opt/pinpoint-agent AGENT_VERSION=1.8.0 AGENT_ID="agent2018052401" # 自定义 APPLICATION_NAME="message-channel-1-7081" # 自定义 CATALINA_OPTS="$CATALINA_OPTS -javaagent:$AGENT_PATH/pinpoint-bootstrap-$AGENT_VERSION-SNAPSHOT.jar" CATALINA_OPTS="$CATALINA_OPTS -Dpinpoint.agentId=$AGENT_ID" CATALINA_OPTS="$CATALINA_OPTS -Dpinpoint.applicationName=$APPLICATION_NAME" web查看监控的agent状态 安装注意事项 1、更改hbase下载地址 
2. Edit profiler-optional/pom.xml and remove the profiler-optional-jdk6 module, otherwise the local build will not pass.
3. If the web UI does not list the application you added, fix hostname resolution on the corresponding agent server by adding the following entry to its hosts file: 127.0.0.1 $hostname
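The hosts fix from note 3 can be applied on the agent server with a single command ($(hostname) expands to the machine's own hostname):
# Map the agent server's hostname to 127.0.0.1 so the Pinpoint web UI can list the application
echo "127.0.0.1 $(hostname)" >> /etc/hosts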
CentOS7 sudo rpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro sudo rpm -Uvh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm CentOS6 sudo rpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro sudo rpm -Uvh http://li.nux.ro/download/nux/dextop/el6/x86_64/nux-dextop-release-0-2.el6.nux.noarch.rpm sudo yum install ffmpeg ffmpeg-devel -y # 查看帮助 ffmpeg -h ffmpeg -i input.mp4 output.avi CentOS6安装FFmpeg最新版 参考:https://trac.ffmpeg.org/wiki/CompilationGuide/Centos 在CentOS上编译FFmpeg 安装依赖包 # 必须要安装的依赖包 yum install autoconf automake bzip2 cmake freetype-devel gcc gcc-c++ git libtool make mercurial pkgconfig zlib-devel cmake hg numactl numactl-devel freetype freetype-devel freetype-demos 在主目录下创建一个新目录,将所有源代码放入: # mkdir ~/ffmpeg_sources 编译和安装 编译安装前卸载直接yum安装的FFmpeg yum remove ffmpeg ffmpeg-devel nasm -y 安装NSAM cd ~/ffmpeg_sources curl -O -L http://www.nasm.us/pub/nasm/releasebuilds/2.13.02/nasm-2.13.02.tar.bz2 tar xjvf nasm-2.13.02.tar.bz2 cd nasm-2.13.02 ./autogen.sh ./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin" make install 安装Yasm cd ~/ffmpeg_sources curl -O -L http://www.tortall.net/projects/yasm/releases/yasm-1.3.0.tar.gz tar xzvf yasm-1.3.0.tar.gz cd yasm-1.3.0 ./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin" make install 安装libx264 cd ~/ffmpeg_sources git clone --depth 1 http://git.videolan.org/git/x264 cd x264 PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin" --enable-static make install 安装libx265 cd ~/ffmpeg_sources hg clone https://bitbucket.org/multicoreware/x265 cd ~/ffmpeg_sources/x265/build/linux cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX="$HOME/ffmpeg_build" -DENABLE_SHARED:bool=off ../../source make install 安装libfdk_aac cd ~/ffmpeg_sources git clone --depth 1 https://github.com/mstorsjo/fdk-aac cd fdk-aac autoreconf -fiv ./configure --prefix="$HOME/ffmpeg_build" --disable-shared make install 安装libmp3lame cd ~/ffmpeg_sources curl -O -L http://downloads.sourceforge.net/project/lame/lame/3.100/lame-3.100.tar.gz tar xzvf lame-3.100.tar.gz cd lame-3.100 ./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin" --disable-shared --enable-nasm make install libopus cd ~/ffmpeg_sources curl -O -L https://archive.mozilla.org/pub/opus/opus-1.2.1.tar.gz tar xzvf opus-1.2.1.tar.gz cd opus-1.2.1 ./configure --prefix="$HOME/ffmpeg_build" --disable-shared make install 安装libogg cd ~/ffmpeg_sources curl -O -L http://downloads.xiph.org/releases/ogg/libogg-1.3.3.tar.gz tar xzvf libogg-1.3.3.tar.gz cd libogg-1.3.3 ./configure --prefix="$HOME/ffmpeg_build" --disable-shared make install 安装libvorbis cd ~/ffmpeg_sources curl -O -L http://downloads.xiph.org/releases/vorbis/libvorbis-1.3.5.tar.gz tar xzvf libvorbis-1.3.5.tar.gz cd libvorbis-1.3.5 ./configure --prefix="$HOME/ffmpeg_build" --with-ogg="$HOME/ffmpeg_build" --disable-shared make install 安装libvpx # 这里坑了笔者两个多小时,笔者直接克隆的github上源码。configure过不去,一直报错,解决如下: cd ~/ffmpeg_sources wget https://github.com/webmproject/libvpx/archive/v1.7.0.tar.gz tar zxvf v1.7.0.tar.gz mv libvpx-1.7.0 libvpx cd libvpx ./configure --prefix="$HOME/ffmpeg_build" --with-ogg="$HOME/ffmpeg_build" --disable-shared #终于过去,高兴坏了 make install 安装FFmpeg # http://ffmpeg.org/releases/ 笔者这里用的是最新的开发版本,建议使用当前最新版本,比如:ffmpeg-4.0.tar.gz cd ~/ffmpeg_sources curl -O -L https://ffmpeg.org/releases/ffmpeg-snapshot.tar.bz2 tar xjvf ffmpeg-snapshot.tar.bz2 cd ffmpeg PATH="$HOME/bin:$PATH" PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure \ 
--prefix="$HOME/ffmpeg_build" \ --pkg-config-flags="--static" \ --extra-cflags="-I$HOME/ffmpeg_build/include" \ --extra-ldflags="-L$HOME/ffmpeg_build/lib" \ --extra-libs=-lpthread \ --extra-libs=-lm \ --bindir="$HOME/bin" \ --enable-gpl \ --enable-libfdk_aac \ --enable-libfreetype \ --enable-libmp3lame \ --enable-libopus \ --enable-libvorbis \ --enable-libvpx \ --enable-libx264 \ --enable-libx265 \ --enable-nonfree make # 这一步时间有点长 make install hash -r 现在编译完成,ffmpeg(也是ffprobe,ffserver,lame和x264)现在应该可以使用了,笔者下边文章介绍安装过程中遇到错误及解决办法,以后介绍如何更新或删除FFmpeg
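After make install and hash -r complete, a hedged way to confirm that the new binary is picked up and that the expected encoders were compiled in:
# Confirm the freshly built ffmpeg is on PATH and check for the key encoders
ffmpeg -version | head -1
ffmpeg -encoders 2>/dev/null | grep -E "libx264|libx265|libfdk_aac|libmp3lame"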
An awesome & curated list of best applications and tools for Cloud Native. This Awesome Repository is highly inspired from cncf's landscape & Awesome. Items marked with Open-Source Software are open-source software. Items marked with Freeware are free. Cloud Native Services Components Cloud Provisioning Runtime Orchestration & Management Application Definition & Development Platform Serverless Observability & Analysis Cloud Public - A public cloud is a pool of virtual resources—developed from hardware owned and managed by a third-party company—that is automatically provisioned and allocated among multiple clients through a self-service interface. Alibaba Cloud - Alibaba Cloud develops highly scalable cloud computing and data management services. Amazon Web Services - Amazon Web Services provides information technology infrastructure services to businesses in the form of web services. Azure Cloud - Microsoft is a software corporation that develops, manufactures, licenses, supports, and sells a range of software products and services. Baidu Cloud - Baidu is a Chinese website and search engine that enables individuals to obtain information and find what they need. DigitalOcean - DigitalOcean is an IaaS company that delivers a seamless way for developers and businesses to deploy and scale any application in the cloud. Fujitsu K5 - Fujitsu provides information technology and communications solutions. Google Cloud - Google is a multinational corporation that is specialized in internet-related services and products. Huawei Cloud - Huawei Technologies provides infrastructure application software and devices with wireline, wireless, and IP technologies. IBM Cloud - IBM is an IT technology and consulting firm providing computer hardware, software, and infrastructure and hosting services. Oracle Cloud - Oracle is a computer technology corporation developing and marketing computer hardware systems and enterprise software products. Joyent Cloud - Your Cloud, Your Way Packet Cloud - Packet is a bare metal cloud built for developers. 8 minute deploys, no hypervisor, & full automation support from 15 global data centers. Tencent Cloud - Tencent is a Chinese internet service portal offering value-added internet, mobile, telecom, and online advertising services. Citrix Cloud - Move Faster, Work Better, Lower IT Costs Private - The private cloud is defined as computing services offered either over the Internet or a private internal network and only to select users instead of the general public. Also called an internal or corporate cloud, private cloud computing gives businesses many of the benefits of a public cloud - including self-service, scalability, and elasticity - with the additional control and customization available from dedicated resources over a computing infrastructure hosted on-premises. Openstack - Repository containing OpenStack repositories Scaleway - Scaleway is the world's first Cloud Computing IaaS platform Foreman - an application that automates the lifecycle of servers Digital Bebar - Digital Rebar Provision is a simple but powerful Golang executable that provides a complete API-driven DHCP/PXE/TFTP provisioning system. MAAS - Official MAAS repository mirror. (Do not submit pull requests or bugs here; use Launchpad instead.) VMware - VMware is a software company providing cloud and virtualization services. Hybrid - A hybrid cloud is a computing environment that combines a public cloud and a private cloud by allowing data and applications to be shared between them. 
Ensono - Complete hybrid IT services – from cloud to mainframe. Operate for today. Optimize for tomorrow. Dellemc - Dell EMC is a powerful part of Dell Technologies' commitment to your transformation Hpe - Hybrid Cloud Solutions Scalr - The Hybrid Cloud Management Platform IBM Z hybrid cloud Rackspace Hybrid Cloud Microsoft Hybrid Cloud VMware Hybrid Cloud AWS Hybrid Cloud Provisioning Container Registries Container Registry is a private Docker repository that works with popular continuous delivery systems. - [ECR](https://aws.amazon.com/cn/ecr/) - Amazon Elastic Container Registry (ECR) is a secure, fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. - [Azure Registry](https://azure.microsoft.com/en-us/services/container-registry/) - Manage a Docker private registry as a first-class Azure resource. - [Codefresh Registry](https://codefresh.io/registry-beta/) - Codefresh is a Docker-native CI/CD platform.Instantly build, test and deploy Docker images. - [Docker Registry](https://docs.docker.com/registry/) - Docker Trusted Registry (DTR) is a commercial product that enables complete image management workflow, featuring LDAP integration, image signing, security scanning, and integration with Universal Control Plane. DTR is offered as an add-on to Docker Enterprise subscriptions of Standard or higher. - [Google Container Registry](https://cloud.google.com/container-registry/) - High-speed, private Docker image storage on Google Cloud Platform - [Harbor](http://vmware.github.io/harbor/) - An Enterprise-class Container Registry Server based on Docker Distribution. - [JFrog Artifactory](https://jfrog.com/artifactory/) - Enterprise Universal Artifact Manager. - [Portus](http://port.us.org/) - Portus is an open source authorization service and user interface for the next generation Docker Registry. - [Project Atomic](https://www.projectatomic.io/) - Atomic Host provides immutable infrastructure for deploying to hundreds or thousands of servers in your private or public cloud. - [QUAY Enterprise](https://coreos.com/quay-enterprise/) - One container registry for your entire enterprise. Host Management & Tooling Host management tool - [Ansible](https://www.ansible.com/) - Ansible is designed around the way people work and the way people work together. - [Chef](https://www.chef.io) - Ship better software, faster.Enable collaboration and continuous automation across your infrastructure, applications, and compliance for all your apps and infrastructure. - [LniuxKit](https://github.com/linuxkit/linuxkit) - A toolkit for building secure, portable and lean operating systems for containers - [CFEngine](https://cfengine.com/) - CFEngine Community - [Puppet](https://puppet.com/) - Server automation framework and application - [Rundeck](https://www.rundeck.com/) - Enable Self-Service Operations: Give specific users access to your existing tools, services, and scripts - [Saltstack](https://saltstack.com/) - Intelligent automation for a software-defined world Infrastructure Automation Infrastructure automation makes servers and VM management more flexible, efficient, and scalable by converting management tasks and policy into code. - [AWS CloudFormation](https://aws.amazon.com/cn/cloudformation/) - [HOSH](https://bosh.io) - BOSH is an open source tool for release engineering, deployment, lifecycle management, and monitoring of distributed systems. 
- [Helm](https://helm.sh/) - Helm is the best way to find, share, and use software built for Kubernetes. - [Infrakit](https://github.com/docker/infrakit) - A toolkit for creating and managing declarative, self-healing infrastructure. - [Juju](https://jujucharms.com/) - Juju is an open source application modelling tool. Deploy, configure, scale and operate your software on public and private clouds. - [Cloud Coreo](https://www.cloudcoreo.com/) - A Platform for Modern Cloud Teams - [Cloudify](https://cloudify.co/) - Radically Simplifying Multi-Cloud Orchestration - [Kubicorn](http://kubicorn.io/) - Create, manage, snapshot, and scale Kubernetes infrastructure in the public cloud. - [ManageIQ](http://manageiq.org/) - Discover, Optimize, and Control your Hybrid IT - [Terraform](https://www.terraform.io/) - Write, Plan, and Create Infrastructure as Code Key Management Key management is the name of management of cryptographic keys in a cryptosystem. Secure Images Secure your images so that you maintain control of how they are displayed on the Internet. - [Notary](https://github.com/theupdateframework/notary) - Notary is a project that allows anyone to have trust over arbitrary collections of data - [TUF](https://theupdateframework.github.io/) - A framework for securing software update systems - [Aqua](https://www.aquasec.com/) - The Aqua Container Security Platform provides development-to-production lifecycle controls for securing containerized applications that run on-premises or in the cloud, on Windows or Linux, supporting multiple orchestration environments. - [Clair](https://coreos.com/clair) - Clair is an open source project for the static analysis of vulnerabilities in appc and docker containers. - [OpenSCAP](https://www.open-scap.org/) - Discover a wide array of tools for managing system security and standards compliance. - [Twistlock](https://www.twistlock.com/) - Container Security for Docker, Kubernetes and Beyond - [Anchore](https://anchore.com/) - An open source complete solution for compliance, certification, security scanning, and auditing of public and private container images. - [anchore.io](https://anchore.io/) - Discover, Analyze, and Certify Container Images. - [Black Duck](https://www.blackducksoftware.com/) - Complete Visibility. Automated Control. - [NeuVector](https://neuvector.com/) - Continuous Network Security for Kubernetes Containers - [Sonatype Nexus](https://www.sonatype.com/) - The world's best way to organize, store, and distribute software components. Runtime Cloud-Native Network Network Segmentation and Policy,SDN & APIs (eg CNI, libnetwork) Incubating CNCF Projects CNI - Container Network Interface - networking for Linux containers CNCF Member Products/Projects Aporeto - Cloud Native Security for Containers and Microservices Cannl - Policy based networking for cloud native applications Contiv - Container networking for various use cases Flannel - flannel is a network fabric for containers, designed for Kubernetes NSX - VMware is a software company providing cloud and virtualization services. Open vSwitch - Open vSwitch is a multilayer software switch licensed under the open source Apache 2 license. OpenContrial - An open-source network virtualization platform for the cloud. Project Calico - Cloud native application connectivity and network policy Weave Net - Simple, resilient multi-host Docker networking and more. 
Non-CNCF Member Products/Projects Aviatrix - The company develops software that enables enterprises to build hybrid clouds by easily Big Switch Networks - Big Switch Networks is the Next-Generation Data Center Networking Company, designing intelligent, agile and flexible networks Cilium - HTTP, gRPC, and Kafka Aware Security and Networking for Containers with BPF and XDP Cumulus - Cumulus Networks, a software company, designs, and sells Linux operating systems for networking hardware. GuardiCoreCentra - GuardiCore provides network security solutions for software defined data centers. MidoNet - MidoNet is an Open Source network virtualization system for Openstack clouds Nuage Networks - Nuage Networks Fundamentals: Software Defined Networking for the Datacenter and Beyond. Plumgrid - PLUMgrid is involved in virtual networking and SDN/NFV to deliver cloud infrastructure solutions that transform businesses. Romana - The Romana Project - Installation scripts, documentation, issue tracker and wiki. Start here. SnapRoute - SnapRoute is an open networking stack company. Cloud-Native Storage Volume Drivers/Plugins,Local Storage Management,Remote Storage Access Sandbox CNCF Projects Rook - File, Block, and Object Storage Services for your Cloud-Native Environments CNCF Member Products/Projects Ceph - Ceph is a unified, distributed storage system designed for excellent performance, reliability and scalability. Container Storage Interface - Container Storage Interface (CSI) Specification. Dell EMC - IT and Workforce Transformation. Made real every day. Diamanti - Diamanti is the first container platform with plug and play network and persistent storage that seamlessly integrates the most widely adopted software stack - standard open source Kubernetes and Docker - so there is no vendor lock-in. QoS on network and storage maximizes container density. Gluster - Gluster is free and open source softeare scalable network filesystem. Hatchway - Persistent Storage for Cloud Native Applications Kasten - Kasten is on a mission to dramatically simplify operational management of stateful cloud-native applications. Manta - Structural variant and indel caller for mapped sequencing data Minio - Minio is a high performance distributed object storage server, designed for large-scale private cloud infrastructure. Minio is widely deployed across the world with over 64.2M+ docker pulls. NetApp - NetApp HCI. All New and Available Now. OpenEBS - OpenEBS is an open source storage platform that provides persistent and containerized block storage for DevOps and container environments. Portworx - The Solution for Stateful Containers in Production. Designed for DevOps. Rex-Ray - REX-Ray is an open source, storage management solution designed to support container runtimes such as Docker and Mesos. StorageOS - Enterprise persistent storage for containers and the cloud. Non-CNCF Member Products/Projects Datera - Datera is an application-driven data infrastructure company. Hedving - Modern storage for the modern business. Infinit - The Elle coroutine-based asynchronous C++ development framework. LeoFS - The LeoFS Storage System OpenIO - OpenIO Software Defined Storage Pure Storage - Pure Storage is an all-flash enterprise storage company that enables broad deployment of flash in data centers. Quobyte - Data Center File System. 
Fast and Reliable Software Storage Robin Systems - Data-Centric Compute and Storage Containerization Infrastructure Software Sheepdog - Distributed Storage System for QEMU Springpath - Springpath is hyperconvergence software that turns standard servers of choice into a single pool of compute and storage resources. Swift - OpenStack Storage (Swift) Container Runtime The new CF Container Runtime gives you more granular control and management of containers with Kubernetes. Incubating CNCF Projects containerd rkt - rkt is a pod-native container engine for Linux. It is composable, secure, and built on standards. CNCF Member Products/Projects CRI-O - Open Container Initiative-based implementation of Kubernetes Container Runtime Interface Intel Clear Containers - OCI (Open Containers Initiative) compatible runtime using Virtual Machines Ixd - Daemon based on liblxc offering a REST API to manage containers Pouch - Pouch is an open-source project created to promote the container technology movement. runc - CLI tool for spawning and running containers according to the OCI specification SmartOS - Converged Container and Virtual Machine HypervisorNon-CNCF Member Products/Projects Kata Containers - Kata Containers runtimes RunV - Hypervisor-based Runtime for OCI Singularity - Singularity: Application containers for Linux Orchestration & Management Scheduling & Orchestration Graduated CNCF Projects Kubernetes - Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications CNCF Member Products/Projects ECS - Amazon Web Services provides information technology infrastructure services to businesses in the form of web services. Docker Swarm - Swarm: a Docker-native clustering system Microsoft Azure Service Fabric - Service Fabric is a distributed systems platform for packaging, deploying, and managing stateless and stateful distributed applications and containers at large scale. Non-CNCF Member Products/Projects Mesos - Mirror of Apache Mesos Nomad - Nomad is a flexible, enterprise-grade cluster scheduler designed to easily integrate into existing workflows. Coordination & Service Discovery Incubating CNCF Projects CoreDNS - CoreDNS is a DNS server that chains plugins. CNCF Member Products/Projects ContainerPilot - A service for autodiscovery and configuration of applications running in containers etcD - Distributed reliable key-value store for the most critical data of a distributed system VMware Haret - A strongly consistent distributed coordination system, built using proven protocols & implemented in Rust. Non-CNCF Member Products/Projects Apache Zookeeper - Apache ZooKeeper is an effort to develop and maintain an open-source server which enables highly reliable distributed coordination. Consul - Consul is a distributed, highly available, and data center aware solution to connect and configure applications across dynamic, distributed infrastructure. Eureka - AWS Service registry for resilient mid-tier load balancing and failover. 
SkyDNS - DNS service discovery for etcd SmartStack - A transparent service discovery framework for connecting an SOA Service Management Envoy - C++ front/service proxy gRPC - The C based gRPC (C++, Python, Ruby, Objective-C, PHP, C#) Linkerd - Production-grade feature-rich service mesh for any platform 3Scale - 3scale api gateway reloaded Ambassador - open source Kubernetes-native API gateway for microservices built on the Envoy Proxy Avi Networks - Avi Networks is a Silicon Valley startup with proven track record in building virtualization, networking and software solutions. Conduit - Ultralight service mesh for Kubernetes F5 - F5 Networks provides application delivery networking technology that optimizes the delivery of network-based applications. Heptio Contour - Contour is a Kubernetes ingress controller for Lyft's Envoy proxy. Kong - ? The Microservice API Gateway NGINX - application delivery for the modern web Open Service Broker API - Open Service Broker API Specification Turbine Labs - Turbine Labs Apache Thrift - Mirror of Apache Thrift Avro - Apache Avro Backplane - A service that unifies discovery, routing, and load balancing for web servers written in any language, running in any cloud or datacenter. HAProxy - The Reliable, High Performance TCP/HTTP Load Balancer Hystrix - Hystrix is a latency and fault tolerance library designed to isolate points of access to remote systems, services and 3rd party libraries, stop cascading failure and enable resilience in complex distributed systems where failure is inevitable. Istio - An open platform to connect, manage, and secure microservices. Netflix Zuul - Zuul is a gateway service that provides dynamic routing, monitoring, resiliency, security, and more. Open Policy Agent (OPA) - An open source project to policy-enable your service. Ribbon - Ribbon is a Inter Process Communication (remote procedure calls) library with built in software load balancers. Traefik - Træfik, a modern reverse proxy Vamp - Vamp - canary releasing and autoscaling for microservice systems. Application Definition & Development Database & Data Warehouse Incubating CNCF Projects Vitess - Vitess is a database clustering system for horizontal scaling of MySQL. CNCF Member Products/Projects Cloudhbase - Lightweight, embedded, syncable NoSQL database engine for iOS (and Mac!) apps. IBM DB2 - IBM is an IT technology and consulting firm providing computer hardware, software, and infrastructure and hosting services. Iguazio - iguazio's Continuous Analytics Data Platform has redesigned the data stack to accelerate performance in big data, IoT and cloud-native apps. Infinispan - Infinispan is an open source data grid platform and highly scalable NoSQL cloud data store. Microsoft SQL Server - Microsoft is a software corporation that develops, manufactures, licenses, supports, and sells a range of software products and services. MySQL - MySQL Server, the world's most popular open source database, and MySQL Cluster, a real-time, open source transactional database. Oracle - Oracle is a computer technology corporation developing and marketing computer hardware systems and enterprise software products. RethinkDB - The open-source database for the realtime web. SQL Data Warehouse - Microsoft is a software corporation that develops, manufactures, licenses, supports, and sells a range of software products and services. YugaByte DB - YugaByteDB is a transactional, high-performance database for building distributed cloud services. 
It currently supports Redis API (as a true DB) and Cassandra API, with SQL coming very soon. Non-CNCF Member Products/Projects ArangoDB - ? ArangoDB is a native multi-model database with flexible data models for documents, graphs, and key-values. Build high performance applications using a convenient SQL-like query language or JavaScript extensions. BigchainDB - Meet BigchainDB. The blockchain database. CarbonData - Mirror of Apache CarbonData Cassandra - Mirror of Apache Cassandra CockroachDB - CockroachDB - the open source, cloud-native SQL database. Crate.io - CrateDB is a distributed SQL database that makes it simple to store and analyze massive amounts of machine data in real-time. Druid - Column oriented distributed data store ideal for powering interactive applications. Hadoop - Mirror of Apache Hadoop MariaDB - MariaDB server is a community developed fork of MySQL server. Started by core members of the original MySQL team, MariaDB actively works with outside developers to deliver the most featureful, stable, and sanely licensed open SQL server in the industry. MemSQL - A real-time data warehouse you can run everywhere MongoDB - MongoDB is a document database with the scalability and flexibility that you want with the querying and indexing that you need NomsDB - The versioned, forkable, syncable database OrientDB - OrientDB is the most versatile DBMS supporting Graph, Document, Reactive, Full-Text, Geospatial and Key-Value models in one Multi-Model product. OrientDB can run distributed (Multi-Master), supports SQL, ACID Transactions, Full-Text indexing and Reactive Queries. OrientDB Community Edition is Open Source using a liberal Apache 2 license. Pachyderm - Reproducible Data Science at Scale! Pilosa - Pilosa is an open source, distributed bitmap index that dramatically accelerates queries across multiple, massive data sets. PostgreSQL - PostgreSQL is a powerful, open source object-relational database system. Presto - Distributed SQL query engine for big data Qubole - Qubole delivers a Self-Service Platform for Big Data Analytics built on Amazon, Microsoft, Google and Oracle Clouds. Redis - Redis is an in-memory database that persists on disk. The data model is key-value, but many different kind of values are supported: Strings, Lists, Sets, Sorted Sets, Hashes, HyperLogLogs, Bitmaps. Scylla - NoSQL data store using the seastar framework, compatible with Apache Cassandra Snowflake - Snowflake is the only data warehouse built for the cloud. Software AG - Software AG provides business process management, data management, and consulting services worldwide. Starburst - Starburst (www.starburstdata.com) is the enterprise Presto company offering an SQL-on-Anything analytics platform. TiDB - TiDB is a distributed HTAP database compatible with the MySQL protocol. Vertica - Vertica Systems develops data management solutions for storing databases and allowing clients to conduct real-time and ad hoc queries. Streaming Incubating CNCF Projects NATS - High-Performance server for NATS, the cloud native messaging system. CNCF Member Products/Projects Amazon Kinesis - Amazon Web Services provides information technology infrastructure services to businesses in the form of web services. CloudEvents - CloudEvents Specification Google Cloud Dataflow - Google is a multinational corporation that is specialized in internet-related services and products. 
Heron - Heron is a realtime, distributed, fault-tolerant stream processing engine from Twitter Non-CNCF Member Products/Projects Apache Apex - Mirror of Apache Apex core Apache NiFi - Mirror of Apache NiFi Apache RocketMQ - Mirror of Apache RocketMQ Apache Spark - Mirror of Apache Spark Apache Storm - Mirror of Apache Storm Flink - Mirror of Apache Flink Kafka - Mirror of Apache Kafka Pulsar - Pulsar - distributed pub-sub messaging system RabbitMQ - RabbitMQ is the most widely deployed open source message broker. StreamSets - StreamSets DataCollector - Continuous big data ingest infrastructure. Source Code Management GitHub - GitHub is a web-based Git repository hosting service offering distributed revision control and source code management functionality of Git. GitLab - GitLab CE | Please open new issues in our issue tracker on GitLab.com Visual Studio Team Services - Microsoft is a software corporation that develops, manufactures, licenses, supports, and sells a range of software products and services. Bitbucket - Atlassian provides collaboration software for teams with products including JIRA, Confluence, HipChat, Bitbucket, and Stash. Application Definition Bitnami - Loved by Devs, Trusted by Ops. Easy to use cloud images, containers, and VMs that work on any platform Docker Compose - Define and run multi-container applications with Docker Habitat - Modern applications with built-in automation OpenAPI - The OpenAPI Specification Repository Telepresence - Local development against a remote Kubernetes or OpenShift cluster Apache Brooklyn - Apache Brooklyn KubeVirt - A virtualization API and runtime add-on for Kubernetes in order to define and manage virtual machines. Packer - Packer is a tool for creating identical machine images for multiple platforms from a single source configuration. CI & CD Continuous integration and continuous delivery are two approaches to software development that are designed to improve code quality and enable rapid delivery and deployment of code. CNCF Member Products/Projects Argo - Get stuff done with container-native workflows for Kubernetes. Cloud 66 Skycap - Ops tools for Devs. Build, deliver, deploy and manage any applications on any cloud or server. Cloudbees - CloudBees offers CloudBees Jenkins Enterprise, an enterprise-grade continuous delivery platform powered by Jenkins. Codefresh - Codefresh is a continuous delivery and collaboration platform for containers and microservices. Codeship - CloudBees offers CloudBees Jenkins Enterprise, an enterprise-grade continuous delivery platform powered by Jenkins. Concourse - BOSH release and development workspace for Concourse ContainerOps - DevOps Orchestration Platform Habitus - A Build Flow Tool for Docker Runner - GitLab Runner is the open source project that is used to run your jobs and send the results back to GitLab. Weave Flux - A tool for deploying container images to Kubernetes services Wercker - The Wercker CLI can be used to execute pipelines locally for both local development and easy introspection. Non-CNCF Member Products/Projects Appveyor - Appveyor Systems Inc. aim is to give powerful continuous integration and deployment tools to every .NET developer. Bamboo - Atlassian provides collaboration software for teams with products including JIRA, Confluence, HipChat, Bitbucket, and Stash. BuddyBuild - Buddybuild is a Vancouver-based app tools company focused on continuous integration and debugging tools. 
Buildkite - The Buildkite Agent is an open-source toolkit written in Golang for securely running build jobs on any device or network CircleCI - CircleCI provides software teams the confidence to build, test, and deploy across numerous platforms. Distelli - True Continuous Delivery from Source Control to Servers. Drone - Drone is a Continuous Delivery platform built on Docker, written in Go Jenkins - Build great things at any scale Octopus Deploy - Octopus Deploy is a user-friendly release management OpenStack Zuul CI - The Gatekeeper, or a project gating system Semaphore - Hosted continuous integration and deployment service Shippable - Shippable helps companies ship code faster by giving them a powerful continuous integration platform built natively on Docker. Solano Labs - Continuous Integration & Deployment Spinnaker - Spinnaker is an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence. Travis - The Ember web client for Travis CI XL Deploy - XebiaLabs develops enterprise-scale Continuous Delivery and DevOps software. Platform Certified Kubernetes - Distribution Apprenda Kismatic Enterprise Toolkit (KET) Appscode Pharmer Caicloud Compass Canonical Distribution of Kubernetes Cloud Foundry Container Runtime CoreOS bootkube DaoCloud Enterprise Diamanti Converged Container Infrastructure Docker EE/CE Ghostcloud EcOS Giant Swarm Managed Kubernetes Google Kube-Up Heptio Quickstart for Kubernetes IBM Cloud Private inwinSTACK kube-ansible Joyent Triton for Kubernetes kube-spawn Kublr Loodse Kubermatic Mesosphere Kubernetes on DC/OS Mirantis Cloud Platform Netease Container Service Dedicated OpenShift Oracle Linux Container Services Oracle Terraform Kubernetes Installer Pivotal Container Service (PKS) Platform9 Managed Kubernetes QFusion Rancher Samsung Kraken StackPointCloud SUSE CaaS Platform Tectonic VMWare Pivotal Container Service (PKS) Weaveworks kubeadm WiseCloud Typhoon Certified Kubernetes - Platform Alauda EE Alibaba Cloud Container Service Azure (ACS) Engine Azure Container Service (AKS) Baidu Cloud Container Engine BoCloud BeyondcentContainer Cisco Container Platform EasyStack Kubernetes Service (EKS) eKing Cloud Container Platform Google Kubernetes Engine (GKE) HarmonyCloud Container Platform Hasura Huawei Cloud Container Engine (CCE) IBM Cloud Container Service Nirmata Managed Kubernetes Oracle Container Engine SAP Certified Gardener Tencent Cloud Container Service (CCS) TenxCloud Container Engine (TCE) ZTE TECS Non-Certified Kubernetes Amazon Elastic Container Service for Kubernetes (EKS) Cloud 66 Maestro Containership Gravitatonal Telekube Huawei FusionStage Navops Supergiant goPaddle Stratoscale Symphony PaaS & Container Service Apcera - Ericsson is a technology company that provides and operates telecommunications networks, television and video systems, and related services. Cloud Foundry Application Runtime - Cloud Foundry Application Runtime utilizes containers as part of its DNA, and has since before Docker popularized containers. The new CF Container Runtime gives you more granular control and management of containers with Kubernetes. Datawire - An early stage startup that's focused on making it easy for developers to build resilient microservices. Exoscale - Exoscale is the cloud hosting platform for SaaS companies, developers and systems administrators. Galactic Fog - Build Future-Proof Applications. Simplify integration. Run applications anywhere. Adapt to changes instantly. 
Heroku - Salesforce is a global cloud computing company that develops CRM solutions and provides business software on a subscription basis. Atomist - Atomist make microservice applications Easy and fun to build, through a cloud-based service. Clouber - Clouber is a provider of mCenter, an Application Modernization/Migration / Management platform across hybrid (public and private) clouds. Convox - Launch a Private Cloud in Minutes. Empire - A PaaS built on top of Amazon EC2 Container Service (ECS) Flynn - A next generation open source platform as a service (PaaS Hyper - Hyper.sh is a secure container cloud service. JHipster - Open Source application generator for creating Spring Boot + Angular projects in seconds! Kontena - The developer friendly container and micro services platform. Works on any cloud, easy to setup, simple to use. Lightbend - Lightbend (formerly Typesafe) is dedicated to helping developers build Reactive applications on the JVM. No Code - The best way to write secure and reliable applications. Write nothing; deploy nowhere. PaaSTA - An open, distributed platform as a service Platform.sh - Platform.sh is an automated, continuous-deployment high-availability cloud hosting solution Portainer - Simple management UI for Docker Scalingo - Scalingo a Docker Platform Service Transform your code as Docker container & run it on our cloud, making it instantly available & scalable. Tsuru - Open source, extensible and Docker-based Platform as a Service (PaaS). Serverless Security PureSec - PureSec is the world's leading Serverless Security Runtime Environment Snyk - Snyk is a security company helping to monitor app vulnerabilities. Libraries Python Lambda - A toolkit for developing and deploying serverless Python code in AWS Lambda. Tools Architect - ? cloud function signatures for http handlers, pubsub, scheduled functions and table triggers Dashbird - AWS Lambda monitoring & debugging platform. Serverless observability & troubleshooting. Serverless monitoring. IOpipe - IOpipe provides a toolbox for developing, monitoring, and operating serverless applications. Microcule - SDK and CLI for spawning streaming stateless HTTP microservices in multiple programming languages Node Lambda - Command line tool to locally run and deploy your node.js application to Amazon Lambda Stackery - Run serverless in production with Stackery's serverless operations console. Thundra - IT Alert and Notifications Management Frameworks AWS Chalice - Python Serverless Microframework for AWS SAM Local - AWS Serverless Application Model (AWS SAM) prescribes rules for expressing Serverless applications on AWS. Serverless - Serverless Framework – Build web, mobile and IoT applications with serverless architectures using AWS Lambda, Azure Functions, Google CloudFunctions & more! Spring Cloud Function - Pivotal is a software company that provides digital transformation technology and services. Apex - Build, deploy, and manage AWS Lambda functions with ease (with Go support!). 
Bustle Shep - A framework for building JavaScript Applications with AWS API Gateway and Lambda ClaudisJS - Deploy Node.js projects to AWS Lambda and API Gateway easily Dawson - A serverless web framework for Node.js on AWS (CloudFormation, CloudFront, API Gateway, Lambda) Flogo - Ultralight Edge Microservices Framework Gordon - λ Gordon is a tool to create, wire and deploy AWS Lambdas using CloudFormation GunIO Zappa - Serverless Python KappaIO - What precedes Lambda Mitoc Group Deep - Full-stack JavaScript Framework for Cloud-Native Web Applications (perfect for Serverless use cases) Sparta - A GO FRAMEWORK FOR AWS LAMBDA Platforms CNCF Member Products/Projects AWS Lambda - Amazon Web Services provides information technology infrastructure services to businesses in the form of web services. Azure Functions - Microsoft is a software corporation that develops, manufactures, licenses, supports, and sells a range of software products and services. Google Cloud Functions - https://cloud.google.com/functions/ IBM Cloud Functions - IBM is an IT technology and consulting firm providing computer hardware, software, and infrastructure and hosting services. Twilio Functions - Twilio is a cloud communication company that enables users to use standard web languages to build voice, VoIP, and SMS apps via a web API. Non-CNCF Member Products/Projects Algorithmia - Algorithmia is an open marketplace for algorithms, enabling developers to create tomorrows smart applications today. Apache OpenWhisk - Apache OpenWhisk is a serverless event-based programming service and an Apache Incubator project. AppScale - AppScale is an easy-to-manage serverless platform for building and running scalable web and mobile applications on any infrastructure. Clay - Rapid Prototyping for Developers Hyper Func - Hyper.sh is a secure container cloud service. Iron.io - Iron.io is a scalable cloud-based message queue and processing platform for building distributed cloud applications. Nano Lambda - Explore deploying code in lambda.Run server-side code with an API call. Overclock - Overclock Labs develops protocols, tools, and infrastructure to make foundational elements of the internet open, decentralized, and simple OVH Functions - OVH.com is an independent French company that offers web, dedicated, and cloud hosting solutions. PubNub Functions - The PubNub Data Stream Network enables mobile and web developers to build and scale realtime apps. Spotinst Functions - Our SaaS optimization platform delivers significant cost reduction for AWS and GCE, while maintaining high availability and performance. StdLib - StdLib Service Creation, Deployment, and Management Tools Syncano - A serverless application platform to build powerful realtime apps more efficiently. Weblab - Microservices at your fingertips Webtask - Webtasks is a simple, lightweight, and secure way of running isolated backend code that removed or reduces the need for a backend. Zeit Now - Now – Realtime Global Deployments Hybrid Platforms Galactic Fog Gestalt - Build Future-Proof Applications. Simplify integration. Run applications anywhere. Adapt to changes instantly. Nuclio - High-Performance Serverless event and data processing platform Binaris - A high-performance serverless platform for interactive and real-time applications. Cloudboost - One Complete NoSQL Database Service for your app. Fn - The container native, cloud agnostic serverless platform. 
fx - fx is a tool to help you do Function as a Service with painless on your own servers LunchBadger - LunchBadger is a multi-cloud platform for microservices and serverless. Kubernetes-Native Platforms Fission - Fast Serverless Functions for Kubernetes Oracle Application Container Cloud - Oracle is a computer technology corporation developing and marketing computer hardware systems and enterprise software products. Riff - riff is for functions Funktion - a CLI tool for working with funktion Kubeless - Kubernetes Native Serverless Framework OpenFAAS - OpenFaaS - Serverless Functions Made Simple for Docker & Kubernetes OpenLambda - An open source serverless computing platform PubNub - The PubNub Data Stream Network enables mobile and web developers to build and scale realtime apps. Observability & Analysis Monitoring CNCF Member Products/Projects Prometheus - The Prometheus monitoring system and time series database. Amazon CloudWatch - Amazon Web Services provides information technology infrastructure services to businesses in the form of web services. Datadog - Datadog offers a cloud-scale monitoring service. Dynatrace - Dynatrace transform how Web and non-Web business-critical applications are monitored, managed, and optimized throughout their lifecycle. Google Stackdriver - Google is a multinational corporation that is specialized in internet-related services and products. Grafana - The tool for beautiful monitoring and metric analytics & dashboards for Graphite, InfluxDB & Prometheus & More InfluxDB - Scalable datastore for metrics, events, and real-time analytics Instana - Instana is an APM solution that automatically monitors dynamic modern apps. Lighstep - LightStep's mission is to cut through the scale and complexity of today's software to help organizations stay in control of their systems. Log Analytics - Microsoft is a software corporation that develops, manufactures, licenses, supports, and sells a range of software products and services. Netsil - Observability and Monitoring for Modern Cloud Applications SignalFX - Advanced monitoring platform for modern applications Snap - A powerful open telemetry framework.Easily collect, process, and publish telemetry data at scale. SysDig - Linux system exploration and troubleshooting tool with first class support for containers Weave Cloud - Weaveworks provides a simple and consistent way to connect and manage containers and microservices. Non-CNCF Member Products/Projects AppDynamics - AppDynamics develops application performance management (APM) solutions that deliver problem resolution for highly distributed applications. AppNeta - AppNeta is the only app performance monitoring company with solutions for apps you develop, SaaS apps you use & networks that deliver them. Axibase - Purpose-built solution for analyzing and reporting on massive volumes of time-series data collected at high frequency. Catchpoint Systems - Catchpoint is a leading digital experience intelligence company. Centreon - Centreon is a network, system, applicative supervision and monitoring tool. Cobe - Cobe delivers an aggregated view of every element related to your business. CoScale - Full stack performance monitoring. Built for container and microservices applications. Powered by anomaly detection. Graphite - A highly scalable real-time graphing system Honeybadger - Exception, uptime, and performance monit. 
Icinga - Monitoring as code IronDB - Realtime Monitoring and Analytics Librato - Real time operations analytics for metrics from any source Meros - Meros is creating enterprise monitoring and management tools for Docker Nagios - The Industry Standard In IT Infrastructure Monitoring. New Relic - New Relic is a leading digital intelligence company, delivering full-stack visibility and analytics to enterprises around the world. NodeSource - Building products focused on Node.js security and performance for the Enterprise. OpBeat - Opbeat is joining forces with Elastic. OpenTSDB - A scalable, distributed Time Series Database. OpsClarity - Intelligent Monitoring for Modern Applications and Data Infrastructure Outlyer - Infrastructure monitoring platform made for DevOps and microservices. Rocana - Rocana is a San Francisco, CA-based provider of root cause analysis software company Sensu - Monitoring for today's infrastructure. Sentry - Sentry is a cross-platform crash reporting and aggregation platform. Server Density - Monitoring agent for Server Density (Linux, FreeBSD and OS X) StackRox - StackRox delivers the industry's only adaptive threat protection for containers. StackState - The market-leading Algorithmic IT Operations platform Tingyun - Observability and Analysis, Monitoring Wavefront - Wavefront is a hosted platform for ingesting, storing, visualizing and alerting on time series data. Zabbix - The Ultimate Enterprise - class Monitoring Platform Logging Fluentd - Fluentd: Unified Logging Layer (project under CNCF) Humio - Log everything, answer anything Splunk - Splunk provides operational intelligence software that monitors, reports, and analyzes real-time machine data. Elastic - Open Source, Distributed, RESTful Search Engine. Graylog - Free and open source log management Loggly - Loggly parses your log files, shows you the code in GitHub which caused the log errors. 10,000+ customers, including 1/3 of the Fortune 500. Logz - Logz.io is an enterprise-grade ELK as a service with alerts, unlimited scalability, and predictive fault detection. Loom Systems - Predict & Prevent Problems in the Digital Business Sematext - Sematext is a Search and Big Data analytics products and services company. Sumo Logic - Sumo Logic, a log management and analytics service, transforms big data into sources of operations, security and compliance intelligence. Tracing Jaeger - CNCF Jaeger, a Distributed Tracing System OpenTracing - OpenTracing API for Go Spring Cloud Sleuth - Distributed tracing for spring cloud Appdash - Application tracing system for Go, based on Google's Dapper. SkyWalking - A distributed tracing system, and APM ( Application Performance Monitoring ) Zipkin - Zipkin is a distributed tracing system Contribute Contributions are most welcome, please adhere to the contribution guidelines. ⬆ back to top License This work is licensed under a Creative Commons Attribution 4.0 International License.
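Most of the monitoring back ends listed above expose HTTP query APIs, so a deployment can be smoke-tested from a shell before any dashboards exist. A minimal sketch against Prometheus' instant-query endpoint, assuming a server on localhost:9090 and jq installed (both are assumptions, not part of the list above):

```bash
#!/usr/bin/env bash
# List the health of all scrape targets by querying the standard "up" metric.
# Assumes a Prometheus server on localhost:9090 and jq installed.
set -euo pipefail

PROM_URL="${PROM_URL:-http://localhost:9090}"

# /api/v1/query is Prometheus' instant-query endpoint; -G sends the
# url-encoded parameter as a GET query string.
curl -sG "${PROM_URL}/api/v1/query" --data-urlencode 'query=up' |
  jq -r '.data.result[] | "\(.metric.instance)\t\(.value[1])"'
```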
CentOS7 安装并使用Ovirt 4.2 Ovirt 4.2 安装 参考:http://blog.csdn.net/wh211212/article/details/77413178参考:http://blog.csdn.net/wh211212/article/details/79412081(需要使用) 环境准备,两台主机 禁用selinux,关闭防火墙10.1.1.2 (ovirt-engine+GlusterFS) 10.1.1.3 (GlusterFS+nfs) hosts设置 10.1.1.2 ovirt.aniu.so server1 10.1.1.3 nfs.aniu.so docker.aniu.so server2 Ovirt官网文档: http://www.ovirt.org/documentation/ oVirt安装 yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm yum -y install ovirt-engine 安装过程全部使用默认,建议使用默认 在两台主机server1,server2上安装ovirt node yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm yum -y install vdsm 配置Ovirt 安装完成,通过浏览器访问https://ovirt.aniu.so/ovirt-engine/ 登录ovirt UI,用户名 admin,密码是安装过程中设置的密码 使用Ovirt创建虚拟机 创建数据中心 存储类型选择共享的,类型选择本地,每个数据中心下面只能添加一个主机,不采用这种方式 假如有多个数据中心,创建集群的时候选择在那个数据中心下面创建,根据使用选择CPU架构,其他默认即可 添加主机时注意关闭自动配置防火墙选项,在高级设置里面,使用root账号 密码即可,添加主机过程可以查看,事件查看安装过程 查看添加完成的主机 添加nfs data存储域,用于创建虚拟机 标注的地方都需要修改,注意根据自己的配置填入对应的 添加iso存储域,用于存放镜像文件 添加glusterfs data 存储域,高可用 用于创建虚拟机 添加系统镜像文件 # 使用命令 先把镜像文件上传到服务器上,执行上传命令 engine-iso-uploader --nfs-server=nfs.aniu.so:/export/iso upload /usr/local/src/CentOS-7-x86_64-Minimal-1611.iso # 或者通过filezilla上传到服务的 data存储域目录下。然后到移动到正确的位置 创建虚拟机 添加硬盘的时候可以选择不同的data存储域 运行虚拟机 这里笔者安装ovirt-engine的服务器安装了桌面环境,然后通过VNC远程进行虚拟的安装,不安装系统桌面时,笔者配置完虚拟机运行后,通过console不能连上去,会让下载vv格式的文件,很烦,安装桌面配置VNC笔者这里不过多赘述 虚拟机在线迁移 迁移的时候选择要迁移到的主机,注意:不同数据中心下面的虚拟机不能迁移 ovirt备份 参考:https://www.ovirt.org/documentation/admin-guide/chap-Backups_and_Migration/ engine-backup --scope=all --mode=backup --file=ovirt-backup.txt --log=/var/log/ovirt-engine/ovirt-engine.log 笔者安装配置遇到的问题: 存储域添加完成后不知道如何删除
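The single engine-backup command above is easy to wrap in a small cron-driven script so the engine configuration and database are captured daily. A sketch using the same flags shown above; the backup directory and 14-day retention are assumptions:

```bash
#!/usr/bin/env bash
# Daily oVirt engine backup with simple 14-day rotation.
# Uses the same engine-backup invocation as in the post above; only the
# backup directory and the retention period are assumptions.
set -euo pipefail

BACKUP_DIR="/backup/ovirt"
STAMP="$(date +%F)"

mkdir -p "${BACKUP_DIR}"

engine-backup --scope=all --mode=backup \
  --file="${BACKUP_DIR}/ovirt-backup-${STAMP}.tar.gz" \
  --log="${BACKUP_DIR}/ovirt-backup-${STAMP}.log"

# Drop backups older than 14 days.
find "${BACKUP_DIR}" -name 'ovirt-backup-*' -mtime +14 -delete
```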
To enable Oozie web console install the Ext JS library. 参考:http://cdh01.aniu.so:11000/oozie/docs/DG_QuickStart.html YARN (MR2 Included) 管理界面 及 Web UI ResourceManager Web UI HistoryServer Web UI[站外图片上传中...(image-47c126-1513138023093)] Zookeeper 管理界面 笔者这里zookeeper安装的时候选择的默认,因此只安装了一个zookeeper,但个人感觉后期应该需要增加zookeeper的界面数量 下面开始说安装的注事事项 1、配置环境要符合要求,要纯净的系统环境 # 笔者环境 # CM env 192.168.1.137 cdh01.aniu.so CentOS6.9 16G Memory 100G LVM卷 (Manger 节点) 192.168.1.148 cdh02.aniu.so CentOS6.9 4G Memory 70G LVM卷 192.168.1.149 cdh03.aniu.so CentOS6.9 4G Memory 70G LVM卷 192.168.1.150 cdh04.aniu.so CentOS6.9 4G Memory 70G LVM卷 建议小白参考笔者的环境配置,主机名可以自定义 #对四个节点的系统进行更新,安装开发工具包 yum update -y && yum -y groupinstall "Development Tools" 2、关闭防火墙、禁用Selinux # 关闭防火墙 /etc/init.d/iptables stop && /etc/init.d/ip6tables stop chkconfig iptables off && chkconfig ip6tables off # 建议采用修改内核参数的方式关闭ip6tables vim /etc/modprobe.d/dist.conf # 编辑此文件,在最后加入: # Disable ipv6 alias net-pf-10 off alias ipv6 off # 禁用selinux sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config setenforce 0 # 不重启临时生效 3、内核参数调整 # 内存参数调整 sysctl -w vm.swappiness=10 或者 编辑vim /etc/sysctl.conf,在最后加入: vm.swappiness = 10 编辑启动项vim /etc/rc.local,最后加入: echo never > /sys/kernel/mm/transparent_hugepage/defrag echo never > /sys/kernel/mm/transparent_hugepage/enabled 注:上面所有操作在所有节点都需要执行 4、所有节点间配置免密认证 # CM节点执行 ssh-keygen -t rsa -b 2048 # 有确认提示,一直按回车即可 cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys chmod 600 ~/.ssh/authorized_keys # 笔者 hosts.conf # CM node 192.168.1.137 cdh01.aniu.so 192.168.1.148 cdh02.aniu.so 192.168.1.149 cdh03.aniu.so 192.168.1.150 cdh04.aniu.so # 同步密钥 for ip in $(awk '{print $1}' hosts.conf );do scp ~/.ssh/authorized_keys root@$ip:/root/.ssh ;done ssh-copy-id root@cdh01.aniu.so ssh-copy-id root@cdh02.aniu.so ssh-copy-id root@cdh03.aniu.so ssh-copy-id root@cdh04.aniu.so # 上面操作也需要在所有节点执行 5、使用cloudera-manger repo安装CM # 在CM节点执行 wget http://archive.cloudera.com/cm5/redhat/6/x86_64/cm/cloudera-manager.repo -P /etc/yum.repos.d wget https://archive.cloudera.com/cdh5/redhat/6/x86_64/cdh/cloudera-cdh5.repo -P /etc/yum.repos.d yum clean all && yum makecache # 建议执行不强制 yum install oracle-j2sdk1.7 -y yum install cloudera-manager-daemons cloudera-manager-server -y # 在其他节点执行 wget http://archive.cloudera.com/cm5/redhat/6/x86_64/cm/cloudera-manager.repo -P /etc/yum.repos.d yum install oracle-j2sdk1.7 -y # 配置JAVA_HOME 编辑vim /etc/profile export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera export PATH=$JAVA_HOME/bin:$PATH export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar 保存退出执行: source /etc/profile 使更改的环境变量生效 # 在所有节点执行配置JAVA_HOME的操作 6、CM节点安装数据库,或使用已有的数据 # 笔者使用mysql57-community.repo,安装的mysql [mysql57-community] name=MySQL 5.7 Community Server baseurl=http://repo.mysql.com/yum/mysql-5.7-community/el/6/$basearch/ enabled=1 gpgcheck=0 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql yum install mysql-community-embedded mysql-community-server mysql-community-devel mysql-community-client -y # 笔者my.cnf [root@cdh01 yum.repos.d]# cat /etc/my.cnf [client] port = 3306 socket = /var/lib/mysql/mysql.sock [mysqld] datadir = /opt/mysql socket = /var/lib/mysql/mysql.sock #skip-grant-tables skip-ssl disable-partition-engine-check port = 3306 skip-external-locking key_buffer_size = 16M max_allowed_packet = 1M table_open_cache = 64 sort_buffer_size = 512K net_buffer_length = 8K read_buffer_size = 256K read_rnd_buffer_size = 512K myisam_sort_buffer_size = 8M thread_cache_size = 8 query_cache_size = 8M tmp_table_size = 16M performance_schema_max_table_instances = 500 
explicit_defaults_for_timestamp = true max_connections = 500 max_connect_errors = 100 open_files_limit = 8192 log-bin=mysql-bin binlog_format=mixed server-id = 1 expire_logs_days = 10 early-plugin-load = "" default_storage_engine = InnoDB innodb_file_per_table = 1 innodb_data_home_dir = /opt/mysql innodb_data_file_path = ibdata1:1024M;ibdata2:10M:autoextend innodb_log_group_home_dir = /opt/mysql innodb_buffer_pool_size = 16M innodb_log_file_size = 5M innodb_log_buffer_size = 8M innodb_flush_log_at_trx_commit = 1 innodb_lock_wait_timeout = 50 innodb_log_files_in_group = 3 innodb_buffer_pool_size = 12G innodb_log_file_size = 512M innodb_log_buffer_size = 256M innodb_flush_log_at_trx_commit = 2 innodb_lock_wait_timeout = 150 innodb_open_files = 600 innodb_max_dirty_pages_pct = 50 innodb_file_per_table = 1 [mysqldump] quick max_allowed_packet = 16M [mysql] no-auto-rehash [myisamchk] key_buffer_size = 20M sort_buffer_size = 20M read_buffer = 2M write_buffer = 2M [mysqlhotcopy] xinteractive-timeout symbolic-links=0 slow_query_log long_query_time = 5 slow_query_log_file = /var/log/mysql-slow.log log-error = /var/log/mysqld.log pid-file = /var/run/mysqld/mysqld.pid # 初始化mysql,并设置启动数据库设置root密码 /usr/sbin/mysqld --initialize --user=mysql --socket=/var/lib/mysql/mysql.sock # 先执行 mysql_secure_installation # 再执行 # 创建CM启动用到的数据库 mysql -u root -pAniuops123. -e "create database cmf DEFAULT CHARACTER SET utf8;" mysql -u root -pAniuops123. -e "GRANT ALL PRIVILEGES ON `cmf`.* TO 'cmf'@'localhost' IDENTIFIED BY 'Aniunas123.'";" 启动cloudera-scm-server,并配置parcel # 生成db配置文件 /usr/share/cmf/schema/scm_prepare_database.sh mysql cmf cmf Aniucmf123. # 启动cloudera-scm-server /etc/init.d/cloudera-scm-server start # 查看启动日志 # 配置parcel离线 cd /opt/cloudera/parcel-repo/ # 然后下载 wget http://archive.cloudera.com/cdh5/parcels/latest/CDH-5.13.1-1.cdh5.13.1.p0.2-el6.parcel wget http://archive.cloudera.com/cdh5/parcels/latest/CDH-5.13.1-1.cdh5.13.1.p0.2-el6.parcel.sha1 wget http://archive.cloudera.com/cdh5/parcels/latest/manifest.json # 注:读者根据cloudera当前CDH最新版本更改下载用到的URL mv CDH-5.13.1-1.cdh5.13.1.p0.2-el6.parcel.sha1 CDH-5.13.1-1.cdh5.13.1.p0.2-el6.parcel.sha # 强制执行、默认使用本地的parcels包,不更改sha1,cloudera-scm-server启动安装时会去cloudera官网找匹配的parcel安装包 重启cloudera-scm-server,查看实时日志 /etc/init.d/cloudera-scm-server restart tailf /var/log/cloudera-scm-server/cloudera-scm-server.log 通过CM管理界面安装CDH,注意事项 # CM server启动成功即可通过http://192.168.1.137:7180访问,默认账户密码:admin admin # **重点内容** 下面的话很重要: 不要勾选:单用户模式 ,笔者在此模式下安装多次都没成功,有心人可以测试 能一次性安装成功的最好,安装不成功建议多试几次,对初始化完成的虚拟机进行快照操作,便于恢复笔者需要维护线上的hadoop集群环境,考虑在本地搭建一套类似的hadoop集群,便于维护与管理。 Cloudera 简介 经过搜索发现Cloudera产品很适合笔者当前需求,于是开始研究Cloudera(CDH)的安装与使用,参考: Cloudera 官网:https://www.cloudera.com Cloudera 官方文档: https://www.cloudera.com/documentation/enterprise/latest.html CDH是Apache Hadoop和相关项目的最完整,经过测试的流行发行版。 CDH提供了Hadoop的核心元素 - 可扩展的存储和分布式计算 - 以及基于Web的用户界面和重要的企业功能。 CDH是Apache许可的开放源码,是唯一提供统一批处理,交互式SQL和交互式搜索以及基于角色的访问控制的Hadoop解决方案。 Cloudera作为一个强大的商业版数据中心管理工具,提供了各种能够快速稳定运行的数据计算框架,如Apache Spark;使用Apache Impala做为对HDFS,HBase的高性能SQL查询引擎;也带了Hive数据仓库工具帮助用户分析数据; 用户也能用Cloudera管理安装HBase分布式列式NoSQL数据库;Cloudera还包含了原生的Hadoop搜索引擎以及Cloudera Navigator Optimizer去对Hadoop上的计算任务进行一个可视化的协调优化,提高运行效率;同时Cloudera中提供的各种组件能让用户在一个可视化的UI界面中方便地管理,配置和监控Hadoop以及其它所有相关组件,并有一定的容错容灾处理;Cloudera作为一个广泛使用的商业版数据中心管理工具更是对数据的安全决不妥协! 
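Going back to the parcel preparation step above (/opt/cloudera/parcel-repo): before restarting cloudera-scm-server it is worth confirming the downloaded parcel is intact. A sketch, assuming the .sha1 file contains the hex digest in its first field, which is how Cloudera publishes it:

```bash
#!/usr/bin/env bash
# Verify the downloaded CDH parcel against its published SHA-1 digest
# before renaming the .sha1 file to .sha for offline installation.
set -euo pipefail

cd /opt/cloudera/parcel-repo

PARCEL="CDH-5.13.1-1.cdh5.13.1.p0.2-el6.parcel"

# First field of the .sha1 file is the expected digest.
expected="$(awk '{print $1}' "${PARCEL}.sha1")"
actual="$(sha1sum "${PARCEL}" | awk '{print $1}')"

if [ "${expected}" = "${actual}" ]; then
    echo "parcel checksum OK"
    mv "${PARCEL}.sha1" "${PARCEL}.sha"
else
    echo "checksum mismatch: expected ${expected}, got ${actual}" >&2
    exit 1
fi
```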
CDH 提供: 灵活性 - 存储任何类型的数据,并使用各种不同的计算框架进行处理,包括批处理,交互式SQL,自由文本搜索,机器学习和统计计算。 集成 - 在一个可与广泛的硬件和软件解决方案配合使用的完整Hadoop平台上快速启动并运行。 安全 - 过程和控制敏感数据。 可扩展性 - 启用广泛的应用程序并进行扩展和扩展,以满足您的需求。 高可用性 - 充满信心地执行关键业务任务。 兼容性 - 利用您现有的IT基础设施和资源。 上述描述来自:https://www.cloudera.com/documentation/enterprise/latest/topics/cdh_intro.html Cloudera Manager 介绍 Cloudera Manager可以轻松管理任何生产规模的Hadoop部署。通过直观的用户界面快速部署,配置和监控群集 - 完成滚动升级,备份和灾难恢复以及可定制警报。 Cloudera Manager作为Cloudera Enterprise的集成和支持部分提供。 参考:https://www.cloudera.com/documentation/enterprise/latest/topics/cm_intro_primer.html#concept_wfj_tny_jk 如下所示,Cloudera Manager的核心是Cloudera Manager Server。服务器托管管理控制台Web服务器和应用程序逻辑,负责安装软件,配置,启动和停止服务以及管理运行服务的集群。 Cloudera Manager Server与其他几个组件一起工作: agent - 安装在每台主机上。代理负责启动和停止进程,解包配置,触发安装和监视主机。 管理服务 - 由一组执行各种监视,警报和报告功能的角色组成的服务。 数据库 - 存储配置和监视信息。通常,多个逻辑数据库在一个或多个数据库服务器上运行。例如,Cloudera Manager Server和监视角色使用不同的逻辑数据库。 Cloudera存储库 - 由Cloudera Manager分发的软件存储库。 客户端 - 是与服务器交互的接口: 管理控制台 - 管理员用于管理集群和Cloudera Manager的基于Web的用户界面。 API - 与开发人员创建自定义Cloudera Manager应用程序的API。 安装Cloudera Manager和CDH 系统环境:CentOS6.9软件环境:Oracle JDK、Cloudera Manager Server 和 Agent 、数据库、CDH各组件 系统初始化(每个服务器都要做) # 关闭iptables、禁用selinux /etc/init.d/iptables stop && chkconfig iptables off sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config && setenforce 0 # 每台服务器之间设置免密认证 192.168.1.137 cdh.master.aniu.so master 192.168.1.148 cdh.node1.aniu.so node1 192.168.1.149 cdh.node2.aniu.so node2 192.168.1.150 cdh.node3.aniu.so node3 ## 注:在每台服务器配置hosts,master和node1/2/3代表服务器的主机名 # 设置swap参数 echo never > /sys/kernel/mm/transparent_hugepage/defrag #建议写到开启启动新里 sysctl -w vm.swappiness=0 # 建议写进sysctl.conf # 设置ntp同步服务器时间 */2 * * * * /usr/sbin/ntpdate 0.cn.pool.ntp.org >> /dev/null 2>&1 Cloudera安装步骤参考:https://www.cloudera.com/documentation/enterprise/latest/topics/installation_installation.html 阶段1:安装JDK(忽略) [Java SE 8 Downloads](http://www.oracle.com/technetwork/java/javase/downloads/java-archive-javase8-2177648.html) export JAVA_HOME=/usr/java/jdk.1.8.0_nn # java -version java version "1.8.0_144" Java(TM) SE Runtime Environment (build 1.8.0_144-b01) Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode) # 注 此处不用安装JDK,因为CM源有封装好的jdk, 阶段2:设置数据库 # 使用mysql数据库,提前安装好mysql # mysql -u root -ppassword -e "create database cmf DEFAULT CHARACTER SET utf8;" # mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON `cmf`.* TO 'cmf'@'localhost' IDENTIFIED BY 'cmfpassword'";" 阶段3:安装Cloudera Manager服务器 # 配置cloudera-cdh源和cloudera-manager源 # cloudera-manager wget http://archive.cloudera.com/cm5/redhat/6/x86_64/cm/cloudera-manager.repo # cloudera-cdh wget https://archive.cloudera.com/cdh5/redhat/6/x86_64/cdh/cloudera-cdh5.repo # 安装jdk和cloudera-manager sudo yum install oracle-j2sdk1.7 -y sudo yum install cloudera-manager-daemons cloudera-manager-server -y 阶段4:启动CM服务并通过浏览器访问 # /etc/init.d/cloudera-scm-server restart Stopping cloudera-scm-server: [ OK ] Starting cloudera-scm-server: [ OK ] # 查看日志是否有报错,根据报错修改,然后再重新启动 tailf /var/log/cloudera-scm-server/cloudera-scm-server.log 浏览器访问:http://192.168.1.137:7180,用户名密码:admin admin 创建必需的数据库 # 参考:https://www.cloudera.com/documentation/enterprise/latest/topics/install_cm_mariadb.html # hive hue amon man nas navms oos create database metastore DEFAULT CHARACTER SET utf8; grant all on metastore.* TO 'hive'@'%' IDENTIFIED BY 'Aniuhive123.'; create database amon DEFAULT CHARACTER SET utf8; grant all on amon.* TO 'amon'@'%' IDENTIFIED BY 'Aniuamon123.'; create database hue DEFAULT CHARACTER SET utf8; grant all on hue.* TO 'hue'@'%' IDENTIFIED BY 'Aniuhue123.'; create database rman 
DEFAULT CHARACTER SET utf8;
grant all on rman.* TO 'rman'@'%' IDENTIFIED BY 'Aniurman123.';
create database navms DEFAULT CHARACTER SET utf8;
grant all on navms.* TO 'navms'@'%' IDENTIFIED BY 'Aniunavms123.';
create database nas DEFAULT CHARACTER SET utf8;
grant all on nas.* TO 'nas'@'%' IDENTIFIED BY 'Aniunas123.';
create database oos DEFAULT CHARACTER SET utf8;
grant all on oos.* TO 'oos'@'%' IDENTIFIED BY 'Aniuoos123.';
Changing the cluster setup (cleaning a node before reinstalling)
/etc/init.d/cloudera-scm-agent stop
yum remove cloudera-manager-agent -y
find / -name 'clouder*' | xargs rm -rf
find / -name 'cmf*' | xargs rm -rf
# Remove every package that was pulled in via yum cleanly, then reinstall the node through the Cloudera Manager web UI.
References:
http://gepeiyu.com/2017/01/20/cloudera-chi-xian-an-zhuang/
http://blog.csdn.net/ymh198816/article/details/52423200
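The per-service CREATE DATABASE / GRANT statements above differ only in the database name, user, and password, so they can be generated in a loop. A minimal sketch; the root password and the per-service passwords are placeholders and should be replaced:

```bash
#!/usr/bin/env bash
# Create the per-service Cloudera Manager databases and users shown above
# in one pass. Database/user names follow the post; passwords are placeholders.
set -euo pipefail

MYSQL_ROOT_PASS='change-me'    # placeholder root password
SERVICE_DBS="metastore:hive amon:amon hue:hue rman:rman navms:navms nas:nas oos:oos"

for pair in ${SERVICE_DBS}; do
    db="${pair%%:*}"
    user="${pair##*:}"
    pass="${user}-password"    # placeholder; use a real per-service password
    mysql -uroot -p"${MYSQL_ROOT_PASS}" <<SQL
CREATE DATABASE IF NOT EXISTS ${db} DEFAULT CHARACTER SET utf8;
GRANT ALL ON ${db}.* TO '${user}'@'%' IDENTIFIED BY '${pass}';
SQL
done
```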
(一)苹果首席执行官 Cook: 1、 很多人都在谈论 AI,我并不担心机器人会像人一样思考,我担心人像机器一样思考! 2、 我们相信 AR 能够帮助人们工作,而且帮助人们在教育医疗有所突破,让世界更加美好 3、 科技本身并没有好坏之分,必须把科技赋于人性是每个人的责任,技术的好处是普惠于民 4、 我们将竭尽全力降低进入 App 生态圈的门槛 5、 必须为技术注入人性,将价值观注入到技术中 6、 充分利用历史机遇,赋于技术应有的价值,保持开放,有信任和创造力,才能实现对社会、家庭更美好的承诺 (二)谷歌 CEO 桑达尔·皮查伊 1、 以前是人来适应电脑,未来是电脑适应人 2、 6 个月前在乌镇举办阿尔法狗围棋比赛时,我们认为,这是 AI 发展历程中的里程碑式的重大事件 3、 谷歌也在转型,从移动到 AI。目前我们已经跨越到 AI 阶段 4、 计算将无处不在,无论是办公室和车里 5、 计算将以语音、视觉等形式进行,这些变化都可以让数字经济超越互联网 (三)马云 1、 过去 20 年互联网“从无到有”,未来 30 年互联网将会“从有到无”,后一个无是无处不在的无,没有人能离开互联网存在 2、 未来 30 年数据将成为生产资料,计算会是生产力,互联网是一种生产关系。如果我们不数据化,不和互联网相连,那么会比过去 30 年不通电显得更为可怕 3、 这几年几乎全球弥漫着一种对新技术时代和技术的担心之中,对网络空间、对数字经济与其担心,不如担当 4、 人类有灵魂、有信仰、要自信可以控制机器:机器没有灵魂、机器没有信仰,我们人类有灵魂、有信仰、有价值观、有独特的创造力,人类要有自信、相信我们可以控制机器 5、 互联网正在深入社会,超过未来一切技术革命的总和 6、 未来 30 年,制造业不再是带动就业的引擎,未来的制造业都将会是服务业,未来的服务业也必须是新型制造业 7、 数字经济将重塑世界经济,世界经济将会有新的模型,不仅仅是在中国,全世界都在进入一个新的时代 8、 第一次技术革命导致了第一次世界大战,第二次技术革命导致了第二次世界大战,第三次技术革命,也就是说第三次世界大战也将即将打响,但这不是一场国与国之间的战争,这是一场我们携手对抗疾病、贫穷和气候变化的战争 二、 下午互联网大会全体会议部分嘉宾致辞 (一)腾讯马化腾: 1、过去中国企业扮演新技术跟随者,今天要成为新技术驱动者 2、过去一年,数字经济是创新 快的经济活动,全球互联网公司都站在风口上,获得了高速发展。全球市值 大的公司里,7 家科技公司里包括 5 家互联网公司 3、新年代里,新产品的迭代速度以天为单位,大公司也是如此。过去中国企业扮演新技术跟随者,今天要成为新技术的驱动者 4、过去互联网企业是解决个人用户的痛点。未来,互联网企业将给各行各业赋能,解决全部痛点 5、 过去一年腾讯在新技术领域不断加码,坚持 AI 战略,在设立海外实验室、医学 AI、机器筛查医学影像、辅助诊断、糖尿病肺癌等领域,与更多的医院展开合作 6、 去年我提到了,在信息安全领域,我们在探索共同治理模式。今年我们启动了成长守护平台,该平台覆盖了腾讯 200 多个游戏产品 (二)百度李彦宏 1、 互联网人口红利结束了 2、 从金融到房产、教育、医疗等,能想到的产业都会因 AI 而发生变化,这是个伟大的时代,AI 堪比工业革命,期待 AI 能给每一个人带来新的惊喜 3、 十几年前互联网成长动力有 3 个:网民人数成长,上网时间的增加,网上的信息量不断增加 4、 当人口红利没有后,以 AI 的技术创新,将推动发展。当前互联网的 3 个成长动力:算法、算力、数据 5、 中国互联网独特的地方是,7 亿网民说同样的语言,遵守同样的法律,产生统一规则的数据,可以推动算法的创新,从而促进算力的提升。未来中国互联网发展主要的推动力就是 AI 6、 以前互联网公司基本以软件为主,今天,软件硬件和服务,三者要进行强结合,才能发挥效力。以汽车工业为例,现在会因 AI 产生新变化。无论是出行服务商,系统提供商,汽车制造商都将随之改变 (三)尤金·卡巴斯基 1、20 年前我刚成立卡巴斯基时,那是 1996 年,今年 2017 年,我们现在从 1997 年的时候有一年找 500 个恶意的软件,10 年以后也就是 2007 年的时候我们当时收集了 200 万个恶意的软件 2、 今年 2017 年,我们预计将会收集到 9000 万个新的恶意的样品,从 500 个到 200 万个到 9000 万个 3、 我们现在面临的迹象就是很复杂的一个互联网的形势 4、 我们周围现在已经被智能化的设备所包裹其中,很不幸的很多这些系统是非常脆弱 5、 我们非常依赖于网络,网络是被保护,但是保护的还不够,所以这就是我们经常说的网络恐怖主义,这也是个非常严重的问题 6、我们就是要开展合作,包括私营部门的合作,包括开展很多的行动,包括利用新的技术,让它变得更安全,所以我们应该一起努力 三、 下午发布 18 项世界互联网领先科技成果 (一)华为 3GPP 5G 预商用系统 1、华为 3GPP 5G 预商用系统,基于 3GPP 统一标准和规范 2,融合革命性新口技术、创新的上下行解耦技术以及全云化架构和端到端切片技术 3、 完成了从无线网、承载网、核心网、芯片、CPE 等端到端产品和解决方案的构建及测试验证 4、 在商用成熟度和产品性能等方面全面达到世界领先水平 5、 全方位构建能够支撑 2020 年 5G 真正商用目标的能力,为 “ 5G 时代”的到来打下坚实的技术基础 6、 华为已与全球多家运营商展开了联合测试,并以此为基础开展车联网、智能制造、互联网医疗等多个领域的探索和创新 (二)ARM 安全架构 1、 ARM 安全架构通过打造经济、可扩展、易于实施的安全框架,为物联网行业创造更加安全的设备奠定基础 2、 在物联网高速发展的时代,安全已不是可有可无的选项,整个行业都有责任保护我们身处的世界 3、 ARM 安全架构提供了一个基于行业 佳实践的框架,通过它可以在硬件和固件层面实现一致的安全设计,为制造更安全的设备提供了通用的规则和更加经济的方法 4、 它可以通过分析威胁模型,解决在案例中遇到的相似问题; 5、 通过架构为不同设备提供一致的功能和接口 6、 为终端客户提供多样性的选择,进而惠及物联网以及相关的技术和广大供应商 (三)微软人工智能小冰 1、 微软小冰已进入第五代,成长为一款能够进行情感计算,面向情商方向发展的人工智能机器人 2、 可以根据对话分析人的情感并及时作出分析,生成下一轮对话,让对话更加顺利地进行 3、 目前,全球小冰拥有超过 1 亿人类用户,对话数据超过 300 亿轮,进化速度仍在不断加快 4、 对用户而言,微软小冰已不止是一个人工智能机器人,更像是身边的伙伴与真人 5、微软小冰从中国出发,不断向外全球扩展,目前已在中国、日本、美国、印度、印度尼西亚五个国家共 14 个平台上落地,并担任电视栏目主持人、电台主持人、歌手等诸多色会角色。人工智能正在追赶着人类的想象 (四)北斗 1、 北斗是唯一具有短报文的系统 2、 远洋渔民的可靠选择 3、 厘米级高精度服务 4、 北斗 +,车联网平台, 480 万辆,事故率减少 50% ,时间节约 1/3 5、中国卫星产值 2000 亿+,北斗贡献 70% 6、 应用于民航海事通信三大领域,覆盖 50 个国家地区, 30 亿人 7、 北斗是中国的,也是世界的 (五)高通 5G 1、 芯片组实现的全球首个 5G 数据链接 2、 这项技术意味着 5G 新空口毫米波这项移动领域的全新前沿技术得以依托 5G 新空口标准实现 3、 将进一步提高用户体验并显著提高网络容量 4、 5G 调制解调器支持 60Hz 以下和毫米波频段,能为所有主要频谱类型和频段提供一个统一的 5G 设计,协助运营商开展早期 5G 试验和部署 5、 支持智能手机制造商在手机的功耗和尺寸要求下 6、 对 5G 技术进行早期测试和优化,助力 5G 手机的生产。 (六)神威太湖计算机 1、 连续 4 年第一 2、 以解决尖端科学问题为目标 3、 获得戈登贝尔奖,两项世界 高 4、 应用一:中科院软件所清华联合模拟的大气系统( 500 米);米) 5、 应用二:非线性大地震模拟 6、未来应用:能源气候制造业等 (七)量子计算 1、 量子处于多种状态,个纠缠(诡异互动); 、量子处于多种状态,个纠缠(诡异互动) 2、 操纵 50 个量子,相当于现有超级计算机 3、 单 光子源技术、多量的纠缠和控制 
4、 光量子计算机研发成功,阿里 10 比特;纠缠领域可达 18 比特 (八)特斯拉 1、 发电系统存储应用 2、 发电系统:太阳能屋顶和光板 3、 存储装置: 16 个电池存储单元组成的网 4、 储能共享 5、 岛屿供电案例:特斯拉能源方替代传统 (九)滴滴 1、 交通向共享智能的方向发展 2、 出行订单 70% 在滴 3、 纽约东京还停留在传统,中国已经改变 4、 出租车和网约联合,将在金华试点 5、 滴滴代驾替酒驾 6、5 年发展,每个月服务 1.5 亿用户,服务 20 多个国家 7、 滴滴大脑分析每 15 分钟交通情况,供需预测和调度、建立虚拟车站 8、 滴滴拼车对地图和算法有高要求 9、 滴滴安全大脑 10 、滴滴出行平台变成智能交通 (十)摩拜 1、 全世界第一个基于物联网、移动互联网的自行车 2、 800 万辆摩拜单车 3、 管理平台:魔方,智能化调动 4、 与中国移动联合开发 sim 卡,和高通、华为合作提效率服务 5、 和陶氏化学合作,研发新材料 6、 9 月份,与联合国发起骑行日活动 (十一)阿里巴巴—— ET 大脑 1、 超级 人工智能,多维感知、时洞察逐步提升 2、 从单点智能到全局 3、 技术整合:语音识别、图像数据处理 4、 多元数据规模化处理与实时分析 5、 治理模式突破、服务产业发展 6、 应用:自动调配红绿灯,于急救 (十二) 百度 —— DUEROS 1、 听唤醒万物的核心要素:清、懂满足 2、 自然语言处理、多轮对话技术 3、 应用:智能家居、交通出行手机个人服务知识教育 4、 产业链:百度 DuerOS 、芯片商方案硬件 (十三)亚马逊 —— AWS IOT 1、 Thing s—— Cloud —— Intelligence 2、 两个问题:法规角度不允许 +XXX 3、 IOT 优势:快速响应、离线操作简化设备编程降低物联网用成本 (十四)苹果 —— ARKit 1、 AppStore 生态,创造 180 万个就业机会 2、 苹果 ARKit 硬件:摄像头、中央处理器和图形运动传感; 3、 优势:快速、稳定的运动跟踪边界和平面预测环境光线多模板支持 4、 体验川剧变脸 除了以上 14 项独立成果由嘉宾现场讲解外,组委会还联合发布了入围的 4 项先进技术,分别是: (十五) 腾讯人工智能开放平台 (十六) Watson 健康助力“健康中国” (十七) 下一代互联网关键技术 IPV6 (十八)机器触觉(Syn Touchinc)
蓝鲸相关软件包(V3.1.5 Beta)及加密证书(内测版本需申请http://bk.tencent.com/download/#ssl) V3.1.5 Beta V3.1.5 install_ce-1.0.11 ssl_certificates.tar.gz 相关安装需关注蓝鲸公众号获取最新版本及获取方式,生成证书参考社区教程 参考笔者前一篇蓝鲸安装使用文章:http://blog.csdn.net/wh211212/article/details/56847030?locationNum=2&fps=1 系统环境准备 aniu-saas-1 192.168.0.206 CentOS7 nginx,appt,rabbitmq,kafka,zk,es,bkdata,consul,fta aniu-saas-2 192.168.0.207 CentOS7 license,appo,kafka,zk,es,mysql,beanstalk,consul aniu-saas-3 192.168.0.208 CentOS7 paas,cmdb,job,gse,kafka,zk,es,consul,redis 这里注意:下载证书时, 需要同时填写部署 gse, license 的机器 MAC 地址。如果不放心,可以把三台服务器的mac地址都加上通过英文符号";"分割,建议安装的时候自信阅读官网文档 c7系统初始化配置 设置三台服务器间可以ssh免密登录,不过多介绍 关闭SElinux :sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config 安装开发工具包: yum -y groupinstall "Development Tools" 安装epel源: rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm (后面安装rabbitmnq-server时会用到) 配置域名解析 job , paas , cmdb 的域名配置 DNS 解析, 域名解析对应的 A 记录要求填写 nginx 所在机器的 ip 地址, 配置 DNS 时要使浏览器能访问,同时部署的服务器上也能访问对应的域名 aniu-saas-1 (中控机)操作: 以下操作均在中控机执行:(会自动同步安装到另外两台) [root@aniu-saas-1 data]# ll total 1046960 -rw-r--r-- 1 root root 1069917253 Sep 30 16:11 bkce_src-3.1.5.tgz -rw-r--r-- 1 root root 2137009 Sep 30 16:11 install_ce-1.0.11.tgz -rw-r--r-- 1 root root 24757 Sep 30 16:11 ssl_certificates.tar.gz [root@aniu-saas-1 data]# tar xf bkce_src-3.1.5.tgz [root@aniu-saas-1 data]# tar xf install_ce-1.0.11.tgz [root@aniu-saas-1 data]# tar xf ssl_certificates.tar.gz -C ./src/cert/ 准备相关配置文件 部署所需的基本配置文件都在install目录下:参考配置如下: # aniu-saas-1 [root@aniu-saas-1 install]# cat install.config 192.168.0.206 nginx,appt,rabbitmq,kafka,zk,es,bkdata,consul,fta 192.168.0.207 license,appo,kafka,zk,es,mysql,beanstalk,consul 192.168.0.208 paas,cmdb,job,gse,kafka,zk,es,consul,plugin,redis 注:1. 该配置⽂件,要保证逗号前后没有空⽩字符,⾏末没有空⽩字符, ip 后⾯使⽤空格与服务名 称隔开(不能使⽤ tab ) 含有多个内⽹ ip 的机器, install.config 中使⽤ /sbin/ifconfig 输出中的第⼀个内⽹ ip 在 ip 后⾯写上该机器要安装的服务列表即可. nginx 与 cmdb 不能部署在同⼀台机器 gse 与 redis 需要部署在同⼀台机器上 gse 若需要跨云⽀持, gse 所在机器必须由外⽹ IP 增加机器数量时, 可以将以上配置中的服务挪到新的机器上. 要保证: kafka , es , zk 的每个组件的总数量为 3 根据实际情况修改global.env , ports.env - ports.env 中可以配置各项服务的端⼝信息 - globals.env 配置⽂件中, 设定域名,账号密码等信息, 强烈建议修改掉默认值 - global.env 中配置的域名,必须保证可以在服务器上被解析到, 建议使⽤ DNS 进⾏配置, 域名解析对应的 A 记录要求填写 nginx 所在机器的 ip 地址. 若⽆ DNS 服务, 则,需要在安装蓝鲸服务的机器上都配置 hosts , 把 paas , job , cmdb 的 域名都指向 nginx 所在 ip , globals.env [root@aniu-saas-1 install]# cat globals.env # vim:ft=sh # 产品信息含义 # PAAS 集成平台 # CMDB 配置平台 # JOB 作业平台 # GSE 管控平台 # BKDATA 数据平台 ## environment variables # 域名信息 export BK_DOMAIN="ops.aniu.so" # 蓝鲸根域名(不含主机名) export PAAS_FQDN="paas.$BK_DOMAIN" # PAAS 完整域名 export CMDB_FQDN="cmdb.$BK_DOMAIN" # CMDB 完整域名 export JOB_FQDN="job.$BK_DOMAIN" # JOB 完整域名 export APPO_FQDN="o.$BK_DOMAIN" # 正式环境完整域名 export APPT_FQDN="t.$BK_DOMAIN" # 测试环境完整域名 # DB 信息 export MYSQL_USER="root" # mysql 用户名 export MYSQL_PASS="@Aniudb123." # mysql 密码 export REDIS_PASS="@Aniuredis123." # redis 密码 # 账户信息(建议修改) export MQ_USER=admin export MQ_PASS=aniumq export ZK_USER=aniuzk export ZK_PASS='anwg123.' export PAAS_ADMIN_USER=admin export PAAS_ADMIN_PASS=anwg123. 
# 以下变量值不可以修改.每个企业统一 export IMAGE_NAME='bkbase/python:1.0' You have new mail in /var/spool/mail/root hosts 配置 # saas 192.168.0.206 aniu-saas-1 192.168.0.207 aniu-saas-2 192.168.0.208 aniu-saas-3 # aniu-saas 192.168.0.206 paas.ops.aniu.so job.ops.aniu.so cmdb.ops.aniu.so # 笔者的hosts配置文件 更改pip源 在aniu-saas-1上配置: # vi src/.pip/pip.conf [global] index-url = http://mirrors.aliyun.com/pypi/simple trusted-host = mirrors.aliyun.com 配置nginx repo # 在aniu-saas-1 aniu-saas-3 上配置 rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm 配置免密登陆 参考下面在任意一条服务器执行: $ ssh-keygen -t rsa -b 2048 (有确认提示,⼀直按回⻋即可) $ cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys $ chmod 600 ~/.ssh/authorized_keys $ for ip in $(awk '{print $1}' install.config ); do > rsync -a ~/.ssh/authorized_keys root@$ip:/root/.ssh/; > done 开始正式安装 安装过程的输出说明 ⽩⾊: 普通输出 蓝⾊: 步骤说明 ⻩⾊: 警告消息, 可忽略 红⾊: 失败提示,或者错误提示 笔者使用集成方式安装: 以下步骤若有报错/失败, 需要根据提示修复错误后, 重新执⾏ $ ./bk_install base_service # 安装基础环境 $ ./bk_install bk_products # 安装蓝鲸主要产品, 并初始化数据. # 该步骤安装完成后, 可以通过浏览器打开蓝鲸了. cmdb, job 都应该能访问才算是正常 $ ./bk_install app_mgr # 安装 开发者中⼼的 App 管理器 # 该步骤安装完成后, 可以在开发者中⼼的 服务器信息 和 第三⽅服务信息, 中看到已经成功激活的服务 # 此步骤可能会提示安装Rabbitmq失败,解决方法: ** yum install erlang -y # 安装Rabbitmq-server需要的环境 ** $ ./bk_install gse_agent # 在所有机器上安装 gse_agent # 该步骤安装完成后, 可以在 CC 的资源池中看到安装蓝鲸的服务器 ip 列表,此步骤选择性执行,笔者执行的时候有些问题 笔者这里不介绍单步安装的方式,参考:http://www.cnblogs.com/Bourbon-tian/p/7607817.html 本地浏览器访问蓝鲸相关平台查看情况: 配置平台:http://cmdb.ops.aniu.so/ 工作台:http://paas.ops.aniu.so 初始安装工作台只有配置平台和作业平台,后面功能组件是笔者手动安装上去的 作业平台:http://job.ops.aniu.so/ 由于笔者之前安装过2.1版本的蓝鲸,因此这次安装过程比较顺利,建议初次尝试的同学,多阅读几遍官网安装文档,笔者后续会介绍蓝鲸的相关使用。
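Because bk_install expects the paas, cmdb and job domains to resolve to the nginx machine, it is worth checking resolution and basic HTTP reachability before (and after) running the install steps. A sketch using the domains from globals.env and the nginx IP from install.config above; everything else is generic:

```bash
#!/usr/bin/env bash
# Confirm that the BlueKing domains resolve to the nginx host and answer over HTTP.
set -euo pipefail

NGINX_IP="192.168.0.206"
DOMAINS="paas.ops.aniu.so cmdb.ops.aniu.so job.ops.aniu.so"

for d in ${DOMAINS}; do
    resolved="$(getent hosts "${d}" | awk '{print $1}' | head -n1)"
    if [ "${resolved}" != "${NGINX_IP}" ]; then
        echo "WARN: ${d} resolves to '${resolved:-nothing}', expected ${NGINX_IP}" >&2
    fi
    # -o /dev/null -w: print only the HTTP status code.
    code="$(curl -s -o /dev/null -w '%{http_code}' "http://${d}/" || true)"
    echo "${d} -> ${resolved:-unresolved}, HTTP ${code}"
done
```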
用户密码为安装sonar设置的用户名和密码 登录到sonar平台,设置 administration -security -user -administrator (右键,重新获取一个tokens,名字自定义) 右键复制 获取的tokens,然后去jenkins里面配置 sonar jenkins登录 -> configure system -> SonarQube servers 注:笔者安装的富足统一使用域名的方式访问。如果有需要,记得本地设置hosts 配置maven 编辑位于$ MAVEN_HOME / conf或〜/ .m2中的settings.xml文件,设置插件前缀和可选的SonarQube服务器URL <settings> <pluginGroups> <pluginGroup>org.sonarsource.scanner.maven</pluginGroup> </pluginGroups> <profiles> <profile> <id>sonar</id> <activation> <activeByDefault>true</activeByDefault> </activation> <properties> <!-- Optional URL to server. Default value is http://localhost:9000 --> <sonar.host.url> http://sonar.aniu.so # 填写自己的sonar服务器地址 </sonar.host.url> </properties> </profile> </profiles> </settings> 分析一个Maven项目 移动到一个maven项目目录内,执行下面命令 mvn clean verify sonar:sonar # 此段命令也可以在jenkins配置,如下: mvn 分析完成之后,登录sonar平台查看分析结果 从图中可以很明显的看出此项目存在347个BUG,然后给开发创建sonar账号,让他们帮忙修复。。。 相关报错解决 Sonargraph Integration: Skipping project aniu-api-product [tv.aniu:aniu-api-product], since no Sonargraph rules are activated in current SonarQube quality profile [SonarQube] 此报错暂时不影响maven 集成到 sonar上 413 Request Entity Too Large 原因是nginx默认上传文件的大小是1M,可nginx的设置中修改 解决方法如下: 1.打开nginx配置文件 nginx.conf, 路径一般是:/etc/nginx/nginx.conf。 2.在http{}段中加入 client_max_body_size 20m; 20m为允许最大上传的大小(大小可自定义)。 3.保存后重启nginx,问题解决。 sonar Failed to upload report - 500: An error has occurred Caused by: com.mysql.jdbc.PacketTooBigException: Packet for query is too large (22790518 > 16777216). You can change this value on the server by setting the max_allowed_packet' variable. show variables like '%max_allowed_packet%'; 更改mysql 的max_allowed_packet参数,设置 max_allowed_packet = 64M ,然后重启mysql [mysqld] max_allowed_packet=32M https://dev.mysql.com/doc/refman/5.7/en/packet-too-large.html 注:这里报错可以通过查看sonar的web.log得出原因。
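The same analysis can be triggered from the command line (or from a Jenkins "Execute shell" step) by passing the server URL and the token generated in the SonarQube UI as properties, instead of relying only on settings.xml. A sketch; the project path and the token value are placeholders:

```bash
#!/usr/bin/env bash
# Run the SonarQube analysis for a Maven project, pointing the scanner at
# the SonarQube server and authenticating with a user token.
set -euo pipefail

SONAR_URL="http://sonar.aniu.so"
SONAR_TOKEN="paste-token-here"        # placeholder: token from the SonarQube UI

cd /path/to/maven/project             # placeholder project path

mvn clean verify sonar:sonar \
    -Dsonar.host.url="${SONAR_URL}" \
    -Dsonar.login="${SONAR_TOKEN}"
```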
CentOS6 安装并破解confluence Confluence 简介 confluence是一个专业的企业知识管理与协同软件,可以用于构建企业wiki。通过它可以实现团队成员之间的协作和知识共享。 Confluence官网:https://www.atlassian.com/software/confluence 安装环境准备 jdk1.8 mysql5.6 参考jira破解安装,这里笔者把Confluence和jira安装到同一台服务器,因此上面环境配置参考:http://blog.csdn.net/wh211212/article/details/76020723 为Confluence创建对应的数据库、用户名和密码 mysql -uroot -p'211212' -e "create database confluence default character set utf8 collate utf8_bin;grant all on confluence.* to 'confluence'@'%' identified by 'confluencepasswd';" # 根据自己的习惯,重新定义Confluence的用户名和密码 下载confluence安装文件及其破解包 Confluence下载:https://www.atlassian.com/software/confluence/downloads/binary/atlassian-confluence-6.3.1-x64.bin (当前最新版本) 链接:http://pan.baidu.com/s/1qXY29Fu 密码:1w9g # confluence-6.x 破解用jar包 这里建议直接在服务器上面通过wget下载Confluence安装文件,下载到本地的上传到服务器过程中有可能损坏安装文件导致不能正常安装 安装并破解confluence 安装confluence # 移动到confluence安装文件所在目录,执行下面命令进行安装: chmod +x atlassian-confluence-6.3.1-x64.bin sudo ./atlassian-confluence-6.3.1-x64.bin 通过上图可以看出confluence安装到了/opt/atlassian/confluence和/var/atlassian/application-data/confluence目录下,并且confluence默认监听的端口是8090.一路默认安装即可注:confluence的主要配置文件,为/opt/atlassian/confluence/conf/server.xml,和jira类似。此server.xml相当于tomcat中的server.xml配置文件 配置通过域名访问confluence 启动完成之后,通过ip地址访问confluence如下图 使用NGINX代理Confluence的请求 更改confluence的配置文件server.xml 更改前:<Context path="" docBase="../confluence" debug="0" reloadable="false"> 更改后:<Context path="/confluence" docBase="../confluence" debug="0" reloadable="false"> 设置url重定向 <Connector port="8090" connectionTimeout="20000" redirectPort="8443" maxThreads="48" minSpareThreads="10" enableLookups="false" acceptCount="10" debug="0" URIEncoding="UTF-8" protocol="org.apache.coyote.http11.Http11NioProtocol" proxyName="wiki.aniu.so" proxyPort="80"/> 配置nginx server { listen wiki.aniu.so:80; server_name wiki.aniu.so; location /confluence { client_max_body_size 100m; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://localhost:8090/confluence; location /synchrony { proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://localhost:8091/synchrony; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "Upgrade"; # 配置完成重启confluence和nginx,然后通过域名:http://wiki.aniu.so/confluence 访问confluence 从上图可以看出,通过域名nginx代理confluence已经成功,这里设置为中文继续安装。 选择产品安装并点击下一步,继续安装 这里由于没有插件授权先不勾选,点击下一步 通过上图可以看出需要输入授权码,下面介绍破解授权码。 破解confluence 复制上述截图中的Server ID(BEBV-EVUW-VSN5-KJMK),然后关闭confluence,使用如下命令: http://www.techlife.com.cn/?thread-2.htm 1、安装Confluence,需要KEY的时候从官网直接申请一个测试KEY 2、替换俩个文件,分别是 /opt/atlassian/confluence/confluence/WEB-INF/lib/atlassian-extras-decoder-v2-3.2.jar /opt/atlassian/confluence/confluence/WEB-INF/atlassian-bundled-plugins/atlassian-universal-plugin-manager-plugin-2.22.jar 替换前必须做备份,方便回退。 3、重启Confluence服务,正常使用产品。 选择外界数据库 连接数据库信息 使用mysql 选择空白站点继续安装: 由于我的jira是https的,导致confluence集成jira时除了问题,就使用confluence自己管理账户。 进入欢迎界面 到这里,confluence安装使用已经基本完成,然后开始破解。 替换文件, /opt/atlassian/confluence/confluence/WEB-INF/lib/atlassian-extras-decoder-v2-3.2.jar # 从百度云下载破解用的jar文件,然后重启confluence
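Once the server.xml context path and the nginx proxy are in place, a quick probe of both proxied locations confirms the reverse proxy is wired up. A small sketch using the hostname and paths from the config above; the expectation of specific status codes is an assumption:

```bash
#!/usr/bin/env bash
# After reloading nginx, confirm that both proxied paths answer:
# /confluence (the main application) and /synchrony (collaborative editing).
set -euo pipefail

BASE="http://wiki.aniu.so"

for path in /confluence /synchrony; do
    code="$(curl -s -o /dev/null -w '%{http_code}' "${BASE}${path}" || true)"
    echo "${BASE}${path} -> HTTP ${code}"
done
```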
Configure nginx as a reverse proxy for JIRA with HTTPS
Configure Tomcat
In this setup JIRA is reached at http://jira.aniu.so/jira (standard HTTP port 80), while JIRA itself listens on port 8080 with the context path /jira.
Edit server.xml (under the JIRA install directory) and change the context path from "" to "/jira":
<Context docBase="${catalina.home}/atlassian-jira" path="" reloadable="false" useHttpOnly="true">
<Context docBase="${catalina.home}/atlassian-jira" path="/jira" reloadable="false" useHttpOnly="true">
Configure the connectors
Add the proxyName and proxyPort attributes (replace them with values that fit your environment), plus the additional connector below, which is used for troubleshooting because it bypasses the proxy:
<!-- Nginx Proxy Connector: nginx only, without HTTPS -->
<Connector port="8080" maxThreads="150" minSpareThreads="25" connectionTimeout="20000" enableLookups="false" maxHttpHeaderSize="8192" protocol="HTTP/1.1" useBodyEncodingForURI="true" redirectPort="8443" acceptCount="100" disableUploadTimeout="true" proxyName="jira.aniu.so" proxyPort="80"/>
<!-- OPTIONAL: Nginx Proxy Connector with HTTPS (the variant used in this article) -->
<Connector port="8081" maxThreads="150" minSpareThreads="25" connectionTimeout="20000" enableLookups="false" maxHttpHeaderSize="8192" protocol="HTTP/1.1" useBodyEncodingForURI="true" redirectPort="8443" acceptCount="100" disableUploadTimeout="true" proxyName="jira.aniu.so" proxyPort="443" scheme="https" secure="true"/>
<!-- Standard HTTP Connector -->
<Connector port="8082" maxThreads="150" minSpareThreads="25" connectionTimeout="20000" enableLookups="false" maxHttpHeaderSize="8192" protocol="HTTP/1.1" useBodyEncodingForURI="true" redirectPort="8443" acceptCount="100" disableUploadTimeout="true"/>
Configure nginx
HTTPS requires a certificate. Use an online CSR generator (https://ssl.sundns.com/tool/csrgenerator) to produce the csr and key files for later use.
# Upload the generated csr and key to /etc/pki/tls/certs on the server:
-rw-r--r-- 1 root root 1050 Jul 25 20:26 jira.aniu.so.csr
-rw-r--r-- 1 root root 1675 Jul 25 20:27 jira.aniu.so.key
# Generate the crt file with the following command:
[root@sh-kvm-3-1 certs]# openssl x509 -in jira.aniu.so.csr -out jira.aniu.so.crt -req -signkey jira.aniu.so.key -days 3650
Signature ok
subject=/C=CN/O=aniu/OU=DevOps/ST=Shanghai/L=Shanghai/CN=jira.aniu.so/emailAddress=yunwei@aniu.tv
Getting Private key
Update the nginx configuration with the server blocks below (replace jira.aniu.so with your FQDN and sh-kvm-3-1 with the hostname of the server running JIRA):
# cat jira.aniu.so.conf (nginx installed via yum)
server {
    listen 80;
    server_name jira.aniu.so;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name jira.aniu.so;
    access_log /var/log/nginx/jira.aniu.so.access.log main;
    error_log /var/log/nginx/jira.aniu.so.error.log;
    ssl on;
    ssl_certificate /etc/pki/tls/certs/jira.aniu.so.crt;
    ssl_certificate_key /etc/pki/tls/certs/jira.aniu.so.key;

    location /jira {
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect http:// https://;
        proxy_pass http://sh-kvm-3-1:8081/jira;   # sh-kvm-3-1 is the JIRA host; 8081 is the HTTPS proxy connector defined above
        client_max_body_size 10M;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        # Required for new HTTP-based CLI
        proxy_http_version 1.1;
        proxy_request_buffering off;
    }
}
# After the changes, restart JIRA and nginx and open https://jira.aniu.so/jira; if the page loads, the JIRA/nginx integration is working.
https://confluence.atlassian.com/jirakb/integrating-jira-with-nginx-426115340.html
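After reloading nginx it is worth confirming both the HTTP-to-HTTPS redirect and the certificate actually being served. A quick sketch with curl and openssl, assuming the hostname above; -k is needed because the certificate is self-signed:

```bash
#!/usr/bin/env bash
# Sanity checks for the reverse proxy: the HTTP listener should answer with a
# 301 to HTTPS, and the HTTPS vhost should serve the JIRA context path.
set -euo pipefail

HOST="jira.aniu.so"

# Expect "301" here.
curl -s -o /dev/null -w 'http  -> %{http_code}\n' "http://${HOST}/jira" || true

# Expect "200" (or a 302 to the JIRA dashboard) here.
curl -sk -o /dev/null -w 'https -> %{http_code}\n' "https://${HOST}/jira" || true

# Show the certificate subject and validity dates served on 443.
echo | openssl s_client -connect "${HOST}:443" -servername "${HOST}" 2>/dev/null |
    openssl x509 -noout -subject -dates
```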
CentOS6 安装并破解Jira 7 JIRA软件是为您的软件团队的每个成员构建的,用来规划,跟踪和发布优秀的软件。 https://confluence.atlassian.com/adminjiraserver074/installing-jira-applications-on-linux-881683168.html 最低硬件要求及软件安装 最小硬件依赖 CPU: Quad core 2GHz+ CPU RAM: 6GB Minimum database space: 10GB 更新系统,安装java环境 # 注意:jira需要oracle的java,默认的openjdk是不行的 # http://www.oracle.com/technetwork/java/javase/downloads/index.html,下载jdk-8u131-linux-x64.rpm,然后上传到/usr/local/src yum localinstall jdk-8u131-linux-x64.rpm -y # 查看jdk是否安装成功 # java -version java version "1.8.0_131" Java(TM) SE Runtime Environment (build 1.8.0_131-b11) Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode) 安装mysql5.6,并创建jira数据库及jira用户,后面安装时会用到 注意:jira是支持5.7的,但是Confluence不支持5.7,所以这里安装mysql5.下载mysql的yum包 https://dev.mysql.com/downloads/ 安装 # 服务器配置mysql repo源,https://dev.mysql.com/downloads/repo/yum/,下载mysql57-community-release-el6-11.noarch.rpm然后上传到/usr/local/src # 默认启用的是5.7,更改为5.6 [mysql56-community] name=MySQL 5.6 Community Server baseurl=http://repo.mysql.com/yum/mysql-5.6-community/el/7/$basearch/ enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql [mysql57-community] name=MySQL 5.7 Community Server baseurl=http://repo.mysql.com/yum/mysql-5.7-community/el/7/$basearch/ enabled=0 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql ----------------- # 安装mysql yum clean all && yum install mysql-community-server -y # 启动mysql并设置自启 # /etc/init.d/mysqld start Initializing MySQL database: /usr/bin/mysqladmin -u root password 'new-password' /usr/bin/mysqladmin -u root -h sh-kvm-3-1 password 'new-password' Alternatively you can run: /usr/bin/mysql_secure_installation --defaults-file argument to mysqld_safe when starting the server [ OK ] Starting mysqld: [ OK ] # 初始化mysql并重置密码 /usr/bin/mysql_secure_installation # 创建jira数据库和jira用户 mysql -uroot -p'211212' -e "create database jira;grant all on jira.* to 'jira'@'%' identified by 'jirapasswd';" # 测试jira连接mysql mysql -ujira -pjirapasswd # 连接成功 安装jira JIRA下载地址:https://www.atlassian.com/software/jira/download,下载,然后上传到/usr/local/src wget https://www.atlassian.com/software/jira/downloads/binary/atlassian-jira-software-7.4.1-x64.bin cd /usr/local/src chmod a+x atlassian-jira-software-7.4.1-x64.bin sudo ./atlassian-jira-software-7.4.1-x64.bin # 使用默认安装,安装完成会启动jira 关闭已启动的jira,然后把破解包里面的atlassian-extras-3.2.jar和mysql-connector-java-5.1.42-bin.jar两个文件复制到/opt/atlassian/jira/atlassian-jira/WEB-INF/lib/目录下 /opt/atlassian/jira/bin/stop-jira.sh # 停止jira /opt/atlassian/jira/bin/start-jira.sh # 启动jira 其中atlassian-extras-2.jar是用来替换原来的atlassian-extras-2.jar文件,用作破解jira系统的。 而mysql-connector-java-5.1.42-bin.jar是用来连接mysql数据库的驱动软件包 重新启动jira,访问ip:8080 安装成功并启动jira,通过浏览器访问 langguage可以选择语言,默认支持中文,选择自己安装,然后继续 配置域名访问http://jira.aniu.so/jira 注意:上图中的Mode中,我们在此使用的是Private模式,在这个模式下,用户的创建需要由管理员创建。而在Public模式下,用户是可以自己进行注册。 下面这个页面是需要我们输入jira的license,如下: 注意:上图中的Server ID:BC2Z-EHVP-ERV0-RQUY 因为我们没有正式的license,所以需要我们在jira官网注册一个账号,然后利用这个账号申请一个可以试用30天的license,如下: 注意:这个图中的Server ID就是我们上面刚刚截图的Server ID。 点击生成许可证 通过上图,我们可以很明显的看到试用license已经申请成功。下面开始创建管理员账户,点击Next(此过程较慢。需等待)如下: 设置管理员的页面忘记截图,这里可以忽略,稍后设置邮件通知,点击继续出现欢迎界面。选择中文继续: 创建一个新项目 选择开发方式 到此jira7.4.1软件的安装就已经基本快结束了,下面我们来介绍jira的破解 jira破解 破解jira,其实我们已经破解了,在上面章节我们复制atlassian-extras-3.2.jar到/opt/atlassian/jira/atlassian-jira/WEB-INF/lib/目录下时,再次启动jira时就已经破解了。 到这里,jira的安装和破解基本完成,等下放上破解jira的百度云链接,链接:http://pan.baidu.com/s/1i5kRZgT 密码:5d4g jira使用中相关问题,后续会写博文介绍。
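补充一点:JIRA 连接 MySQL 时,Atlassian 官方一般建议数据库使用 utf8 字符集、utf8_bin 校对(与上文 Confluence 的建库方式一致)。如果安装向导在数据库检查这一步出现字符集告警,可以参考下面的建库语句(root 密码、jira 用户密码均沿用上文示例,按实际修改):

mysql -uroot -p'211212' -e "create database jira default character set utf8 collate utf8_bin; grant all on jira.* to 'jira'@'%' identified by 'jirapasswd'; flush privileges;"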
参考:https://hexo.io/,博客用于记录自己的学习工作历程 参考以下步骤安装 1、搭建环境准备(包括node.js和git环境,gitHub账户的配置)2、安装 配置Hexo,配置将Hexo与github page结合起来3、怎样发布文章 主题 推荐 主题4、Net的简单配置 添加sitemap和feed插件5、添加404 公益页面 安装并配置环境 win10+Node.js+git+github Node.js下载地址:https://nodejs.org/en/download/ Git下载地址:https://git-scm.com/ Github 地址:https://github.com 安装node.js 和 git 步骤省略,按默认傻瓜式安装即可 注册github账号并创建一个以 github昵称.github.io 命名的仓库 根据图中,注册一个github账号,昵称自定义,然后创建一个新项目,名字为:github昵称.github.io 项目创建完成之后,本地生成ssh 私钥和公钥,用于连接github认证,使用上面下载的git,打开git bash ssh-keygen -t rsa -C "github注册邮箱(自定义)" -f .ssh/shaonbean # -f 输出以昵称命名的公钥和私钥,方便记忆 公钥生成之后加到github上,方便后面的使用,用户本地和github进行ssh通信 到这里github设置告一段落 安装配置hexo 注:hexo安装前提需安装node.js 和git hexo官网:https://hexo.io/ hexo官方文档:https://hexo.io/docs/ 文中以J盘为例,创建目录github并创建字目录(用于存放项目) vdevops@shaon MINGW64 /j/github/shaonbean # 注: 如果是linux环境下搭建的hexo博客,不建议使用root权限 下载安装hexo npm install -g hexo-cli # 等待片刻,执行hexo如下图表示安装成功 初始化博客 这里以shaonbean为博客目录,执行下面命令 hexo init shaonbean # 创始化项目 cd shaonbean npm install 测试本地建站是否成功,输入: hexo s INFO Start processing INFO Hexo is running at http://localhost:4000/. Press Ctrl+C to stop. # 出现上面两行,即表示本地建站成功 初始化博客以后,能看到下图: 博客根目录初始化完成之后进项自定义配置,这里用到_config.yml 自定义博客的相关信息 编辑_config.yml配置文件,进行修改,参考下面配置: title: itdevops subtitle: DevOps is everything description: From Zero to the DevOps author: shaonbean language: zh-CN timezone: Asia/Shanghai # language和timezone 有规范,注意格式 配置个人域名 url: http://vdevops.com deploy: type: git repo: https://github.com/shaonbean/shaonbean.github.io.git branch: master repo项是之前Github上创建好的仓库的地址 exec ssh-agent bash ssh-add MYKEY # 这里是针对本地设置多个github账号进行操作 本地生成两对密钥对,然后在~/.ssh/目录下新建config文件,参考下面填入: #————GitHub————— Host github HostName github.com User git PreferredAuthentications publickey IdentityFile ~/.ssh/id_rsa # github.io Host github.io HostName github.com User git PreferredAuthentications publickey IdentityFile ~/.ssh/itdevops 测试本地ssh连接github是否正常 ssh -T git@github ssh -T git@github.io # 笔者这里第二个账号没设置成功,临时使用的https方式进行的通信 使用https,github账号加密码的方式来进行hexo的部署。配置如下: deploy: type: git #repo: git@github.io:shaonbean/shaonbean.github.io.git repo: https://shaonbean:shaonbeanpassword@github.com/shaonbean/shaonbean.github.io.git branch: master message: devops 配置完成之后,现在可以进到设置的项目目录里面通过hexo部署到github 进到你的项目目录。命令行执行下面命令: hexo g # 本地生成数据库文件,目录等 hexo d # 部署到远程 新建一篇博客 hexo new post "devops" 然后通过电脑编辑器(atom)对文章进行编辑,编辑完成之后,再次运行上面的生成,部署命令 hexo g # 本地生成博客 hexo d # 发布到远程 hexo d -g #在部署前先生成 注: 安装git扩展 npm install hexo-deployer-git --save # 没安装插件可能报错:deloyer not found:git ssh key报错 Permission denied (publickey). fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. 后面笔者会专门写一篇添加ssh 密钥的文章 部署完成可以看到github上面shaonbean.github.io,已经存在文件,通过浏览器访问如下: 从上面可以看出我们已经成功部署到远程,并能够正常访问。 配置博客主题 选择NexT,star最多,原因不多说知乎主题推荐:https://www.zhihu.com/question/24422335 cd /j/github/shaonbean.github.io # 这里项目名可以自定义 git clone https://github.com/iissnan/hexo-theme-next themes/next 更换主题完成后,访问: http://blog.csdn.net/gdutxiaoxu/article/details/53576018 http://www.jeyzhang.com/hexo-github-blog-building.html https://www.zrj96.com/post-471.html 看都看啦,给了一毛钱再走呗。。。 欢迎扫码关注 推送DevOps最新资讯及技术文章## 添加EPEL源 rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm ## 定时任务设置 yum -y install cronie-noanacron # yum remove cronie-anacron -y ## 配置vim yum -y install vim-enhanced lrzsz #echo " alias vi='vim' " >> /etc/profile echo "alias vi='vim' " >> ~/.bashrc #source /etc/profile source ~/.bashrc ## 添加用户 useradd yunwei echo anwg123. 
| passwd --stdin yunwei 安装依赖包 [root@sh-kvm-1 ~]# yum -y install qemu-kvm libvirt python-virtinst bridge-utils [root@kvm-1 ~]# lsmod | grep kvm kvm_intel 54285 0 kvm 333172 1 kvm_intel [root@sh-kvm-1 ~]# /etc/rc.d/init.d/libvirtd start Starting libvirtd daemon: [ OK ] [root@sh-kvm-1 ~]# /etc/rc.d/init.d/messagebus start Starting system message bus: [ OK ] [root@sh-kvm-1 ~]# chkconfig libvirtd on [root@sh-kvm-1 ~]# chkconfig messagebus on 配置桥接网络 # 网桥网卡配置 [root@sh-kvm-1 ~]# cp /etc/sysconfig/network-scripts/ifcfg-em1 /etc/sysconfig/network-scripts/ifcfg-br0 [root@sh-kvm-1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-br0 DEVICE=br0 HWADDR=14:18:77:40:29:D3 TYPE=Bridge UUID=9e8e7f89-cfe9-40c6-b547-a08ee6da0864 ONBOOT=yes NM_CONTROLLED=yes BOOTPROTO=none IPADDR=192.168.1.125 NETMASK=255.255.255.0 GATEWAY=192.168.1.1 DNS1=114.114.114.114 # em1网卡配置 [root@sh-kvm-1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-em1 # create new DEVICE=em1 TYPE=Ethernet ONBOOT=yes BRIDGE=br0 [root@sh-kvm-1 ~]# /etc/rc.d/init.d/network restart 查看网桥配置状态 [root@sh-kvm-1 ~]# ifconfig br0 Link encap:Ethernet HWaddr 14:18:77:40:29:D3 inet addr:192.168.1.125 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::1618:77ff:fe40:29d3/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:52655 errors:0 dropped:0 overruns:0 frame:0 TX packets:20216 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:49670413 (47.3 MiB) TX bytes:1665453 (1.5 MiB) em1 Link encap:Ethernet HWaddr 14:18:77:40:29:D3 inet6 addr: fe80::1618:77ff:fe40:29d3/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:302969 errors:0 dropped:0 overruns:0 frame:0 TX packets:96324 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:427674107 (407.8 MiB) TX bytes:7173701 (6.8 MiB) Interrupt:41 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:7 errors:0 dropped:0 overruns:0 frame:0 TX packets:7 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:608 (608.0 b) TX bytes:608 (608.0 b) virbr0 Link encap:Ethernet HWaddr 52:54:00:68:65:A2 inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) vnet0 Link encap:Ethernet HWaddr FE:54:00:08:94:EC inet6 addr: fe80::fc54:ff:fe08:94ec/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:19 errors:0 dropped:0 overruns:0 frame:0 TX packets:3443 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:1243 (1.2 KiB) TX bytes:381667 (372.7 KiB) 创建虚拟机kvm-1 # 创建挂载卷 lvcreate -n kvm-1 -L 20G vg_shkvm1 # 安装虚拟机 virt-install \ --name kvm-1 \ --ram 2048 \ --disk path=/dev/vg_shkvm1/kvm-1 \ --vcpus 2 \ --os-type linux \ --os-variant rhel6 \ --network bridge=br0 \ --graphics none \ --console pty,target_type=serial \ --location 'http://mirrors.aliyun.com/centos/6.9/os/x86_64/' \ --extra-args 'console=ttyS0,115200n8 serial' 图形安装教程 选择安装语言 设置网络配置,使用静态IP 配置静态ip,忘记截图,按照上面网桥ip,设置相同局域网ip即可 静态ip配置成功,如下图会加载安装镜像: 选择使用文本方式安装,即命令行模式 Re-initialize all 初始化磁盘 这里选择初始化全部硬盘,还有一种情况是如果在重装虚拟机的时候,当前lvm卷上面已经存在系统,可以选择替换当前系统的方式安装,这样会保留原来lvm卷上系统的完整信息。 选择时区,上海 设置root密码 安装系统安装位置 初始化磁盘 开始安装系统包文件 等待系统安装包安装完成,重启系统。 参考虚拟机kvm-1的安装,安装kvm-2 安装过程中报错解决 配置桥接时报错:can't create bridge with the same 
name,#本次安装故障原因是br0网卡配置是name没有改,导致重启时重启创建em1报错 # 使用brctl 解决 [root@sh-kvm-1 ~]# brctl Usage: brctl [commands] commands: addbr <bridge> add bridge delbr <bridge> delete bridge addif <bridge> <device> add interface to bridge delif <bridge> <device> delete interface from bridge setageing <bridge> <time> set ageing time setbridgeprio <bridge> <prio> set bridge priority setfd <bridge> <time> set bridge forward delay sethello <bridge> <time> set hello time setmaxage <bridge> <time> set max message age sethashel <bridge> <int> set hash elasticity sethashmax <bridge> <int> set hash max setmclmc <bridge> <int> set multicast last member count setmcrouter <bridge> <int> set multicast router setmcsnoop <bridge> <int> set multicast snooping setmcsqc <bridge> <int> set multicast startup query count setmclmi <bridge> <time> set multicast last member interval setmcmi <bridge> <time> set multicast membership interval setmcqpi <bridge> <time> set multicast querier interval setmcqi <bridge> <time> set multicast query interval setmcqri <bridge> <time> set multicast query response interval setmcqri <bridge> <time> set multicast startup query interval setpathcost <bridge> <port> <cost> set path cost setportprio <bridge> <port> <prio> set port priority setportmcrouter <bridge> <port> <int> set port multicast router show [ <bridge> ] show a list of bridges showmacs <bridge> show a list of mac addrs showstp <bridge> show bridge stp info stp <bridge> {on|off} turn stp on/off # 查看当前网桥配置 [root@sh-kvm-1 ~]# brctl show bridge name bridge id STP enabled interfaces br0 8000.1418774029d3 no em1 vnet0 virbr0 8000.5254006865a2 yes virbr0-nic # 删除刚刚重启网络时创建的网桥 [root@sh-kvm-1 ~]# brctl delbr br0 # 修改正确的网桥br0配置,然后重启网络成功,因此配置网桥的时候特别注意 https://www.server-world.info/en/note?os=CentOS_6&p=kvm&f=2
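网桥配置修正、网络重启成功后,可以用下面几条命令确认宿主机和虚拟机的状态(均为常规检查命令,虚拟机名沿用上文的 kvm-1):

brctl show                 # br0 下应同时挂着 em1 和虚拟机的 vnet0 等接口
virsh list --all           # 查看已定义及运行中的虚拟机
virsh console kvm-1        # 文本安装或排障时可直接进入串口控制台,Ctrl+] 退出
virsh autostart kvm-1      # 如需让虚拟机随宿主机开机自启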
CentOS6 mininal 安装CouchDB2 详细版 couchdb官网: http://couchdb.apache.org/ - Erlang OTP (>=R61B03, =<19.x) - ICU - OpenSSL - Mozilla SpiderMonkey (1.8.5) - GNU Make - GNU Compiler Collection - libcurl - help2man - Python (>=2.7) for docs - Python Sphinx (>=1.1.3) 参考教程:http://docs.couchdb.org/en/2.0.0/install/unix.html # 初始设置,避免不必要的权限问题 /etc/init.d/iptables stop setenforce 0 sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config # 安装依赖 yum -y update yum -y groupinstall "Development Tools" "Development Libraries" rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm yum install autoconf automake curl-devel help2man libicu-devel libtool perl-Test-Harness wget libicu-devel curl-devel ncurses-devel libtool libxslt fop java-1.7.0-openjdk java-1.7.0-openjdk-devel unixODBC unixODBC-devel vim openssl-devel 源码安装erlang yum install erlang-asn1 erlang-erts erlang-eunit erlang erlang-os_mon erlang-xmerl wget http://erlang.org/download/otp_src_19.3.tar.gz #满足依赖的最新版erlang tar -xvf otp_src_19.3.tar.gz cd otp_src_19.3 ./configure && make make install 源码安装 js-devel js-devel-1.8.5 # 无yum安装包 wget http://ftp.mozilla.org/pub/mozilla.org/js/js185-1.0.0.tar.gz cd js-1.8.5/js/src ./configure && make sudo make install 安装autoconf-archive 配置puias-computational.repo 安装autoconf-arch vim /etc/yum.repos.d/puias-computational.repo [PUIAS_6_computational] name=PUIAS computational Base $releasever - $basearch mirrorlist=http://puias.math.ias.edu/data/puias/computational/$releasever/$basearch/mirrorlist gpgcheck=0 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puias Install autoconf-archive rpm package: yum install autoconf-archive -y 源码安装CouchDB wget http://mirror.bit.edu.cn/apache/couchdb/source/2.0.0/apache-couchdb-2.0.0.tar.gz tar zxvf apache-couchdb-2.0.0.tar.gz cd apache-couchdb-2.0.0 ./configure make release # 这里有报错,根据解决方法修改完成之后重新make release,在文章末尾 添加用户启动couchdb # groupadd CouchDB Administrator # adduser --system --no-create-home --shell /bin/bash --group --gecos "CouchDB Administrator" couchdb # 默认CouchDB Administrator不存在,官网命令有点坑 # - adduser: group '--gecos' does not exist adduser --system --no-create-home --shell /bin/bash -c "CouchDB Administrator" couchdb # 使用此条命令 mv /usr/local/src/apache-couchdb-2.0.0/rel/couchdb /usr/local/ chown -R couchdb:couchdb /usr/local/couchdb # find /usr/local/couchdb -type d -exec chmod 0770 {} \; # chmod 0644 /usr/local/couchdb/etc/* 配置couchdb,特别重要 vim /usr/local/couchdb/etc/vm.args -name couchdb@n1couchdb.aniu.so > 注意:前提时设置系统需要设置hostname,修改完成系统hosts文件为 0.0.0.0 localhost localhost.localdomain n1couchdb.aniu.so #0.0.0.0 localhost localhost.localdomain n1couchdb.aniu.so 192.168.0.154 n1couchdb.aniu.so hostname n1couchdb.aniu.so sed -i 's/localhost.localdomain/n1couchdb.aniu.so/g' /etc/sysconfig/network > 上面几步操作是修改hostname,方便识别,为后面配置couchdb集群方便 # -kernel inet_dist_listen_min 9100 # -kernel inet_dist_listen_max 9200 > 上面两个参数暂时不用,配置集群的时候在使用 # 修改couchdb启动时默认监听的ip,默认127.0.0.1,不能通过浏览器进行初始化设置,改为0.0.0.0 sed -i 's/127.0.0.1/0.0.0.0/g' /usr/local/couchdb/etc/default.ini 配置完成之后使用couchdb用户启动couchdb su - couchdb cd /usr/local/couchdb ./bin/couchdb 启动成功界面如下: [info] 2017-07-04T13:09:39.587046Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application couch_log started on node 'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:39.593768Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application folsom started on node 'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:39.649564Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application couch_stats started on node 
'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:39.649666Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application khash started on node 'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:39.662118Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application couch_event started on node 'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:39.670377Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application ibrowse started on node 'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:39.678054Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application ioq started on node 'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:39.678117Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application mochiweb started on node 'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:39.678238Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application oauth started on node 'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:39.689266Z couchdb@n1couchdb.aniu.so <0.210.0> -------- Apache CouchDB 2.0.0 is starting. [info] 2017-07-04T13:09:39.689396Z couchdb@n1couchdb.aniu.so <0.211.0> -------- Starting couch_sup [info] 2017-07-04T13:09:39.937994Z couchdb@n1couchdb.aniu.so <0.210.0> -------- Apache CouchDB has started. Time to relax. [info] 2017-07-04T13:09:39.938230Z couchdb@n1couchdb.aniu.so <0.210.0> -------- Apache CouchDB has started on http://0.0.0.0:5986/ [info] 2017-07-04T13:09:39.938366Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application couch started on node 'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:39.938520Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application ets_lru started on node 'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:39.953625Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application rexi started on node 'couchdb@n1couchdb.aniu.so' [error] 2017-07-04T13:09:40.065167Z couchdb@n1couchdb.aniu.so <0.293.0> -------- ** System running to use fully qualified hostnames ** ** Hostname localhost is illegal ** [info] 2017-07-04T13:09:40.099794Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application mem3 started on node 'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:40.099886Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application fabric started on node 'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:40.126321Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application chttpd started on node 'couchdb@n1couchdb.aniu.so' [notice] 2017-07-04T13:09:40.145151Z couchdb@n1couchdb.aniu.so <0.328.0> -------- chttpd_auth_cache changes listener died database_does_not_exist at mem3_shards:load_shards_from_db/6(line:327) <= mem3_shards:load_shards_from_disk/1(line:315) <= mem3_shards:load_shards_from_disk/2(line:331) <= mem3_shards:for_docid/3(line:87) <= fabric_doc_open:go/3(line:38) <= chttpd_auth_cache:ensure_auth_ddoc_exists/2(line:187) <= chttpd_auth_cache:listen_for_changes/1(line:134) [error] 2017-07-04T13:09:40.145263Z couchdb@n1couchdb.aniu.so emulator -------- Error in process <0.329.0> on node 'couchdb@n1couchdb.aniu.so' with exit value: 
{database_does_not_exist,[{mem3_shards,load_shards_from_db,"_users",[{file,"src/mem3_shards.erl"},{line,327}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,315}]},{mem3_shards,load_shards_from_disk,2,[{file,"src/mem3_shards.erl"},{line,331}]},{mem3_shards,for_docid,3,[{file,"src/mem3_shards.erl"},{line,87}]},{fabric_doc_open,go,3,[{file,"src/fabric_doc_open.erl"},{line,38}]},{chttpd_auth_cache,ensure_auth_ddoc_exists,2,[{file,"src/chttpd_auth_cache.erl"},{line,187}]},{chttpd_auth_cache,listen_for_changes,1,[{file,"src/chttpd_auth_cache.erl"},{line,134}]}]} [info] 2017-07-04T13:09:40.151849Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application couch_index started on node 'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:40.151985Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application couch_mrview started on node 'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:40.152078Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application couch_plugins started on node 'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:40.193218Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application couch_replicator started on node 'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:40.193271Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application couch_peruser started on node 'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:40.205124Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application ddoc_cache started on node 'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:40.225182Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application global_changes started on node 'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:40.225319Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application jiffy started on node 'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:40.233555Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application mango started on node 'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:40.241861Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application setup started on node 'couchdb@n1couchdb.aniu.so' [info] 2017-07-04T13:09:40.241950Z couchdb@n1couchdb.aniu.so <0.9.0> -------- Application snappy started on node 'couchdb@n1couchdb.aniu.so' [notice] 2017-07-04T13:09:45.145647Z couchdb@n1couchdb.aniu.so <0.328.0> -------- chttpd_auth_cache changes listener died database_does_not_exist at mem3_shards:load_shards_from_db/6(line:327) <= mem3_shards:load_shards_from_disk/1(line:315) <= mem3_shards:load_shards_from_disk/2(line:331) <= mem3_shards:for_docid/3(line:87) <= fabric_doc_open:go/3(line:38) <= chttpd_auth_cache:ensure_auth_ddoc_exists/2(line:187) <= chttpd_auth_cache:listen_for_changes/1(line:134) [error] 2017-07-04T13:09:45.145807Z couchdb@n1couchdb.aniu.so emulator -------- Error in process <0.455.0> on node 'couchdb@n1couchdb.aniu.so' with exit value: {database_does_not_exist,[{mem3_shards,load_shards_from_db,"_users",[{file,"src/mem3_shards.erl"},{line,327}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,315}]},{mem3_shards,load_shards_from_disk,2,[{file,"src/mem3_shards.erl"},{line,331}]},{mem3_shards,for_docid,3,[{file,"src/mem3_shards.erl"},{line,87}]},{fabric_doc_open,go,3,[{file,"src/fabric_doc_open.erl"},{line,38}]},{chttpd_auth_cache,ensure_auth_ddoc_exists,2,[{file,"src/chttpd_auth_cache.erl"},{line,187}]},{chttpd_auth_cache,listen_for_changes,1,[{file,"src/chttpd_auth_cache.erl"},{line,134}]}]} 查看couchdb进程 [root@n1couchdb ~]# ps -ef | grep couchdb couchdb 3582 1 0 20:59 ? 
00:00:00 /usr/local/couchdb/bin/../erts-8.3/bin/epmd -daemon root 3804 3789 0 21:06 pts/2 00:00:00 su - couchdb couchdb 3805 3804 0 21:06 pts/2 00:00:00 -bash couchdb 3901 3805 3 21:09 pts/2 00:00:04 /usr/local/couchdb/bin/../erts-8.3/bin/beam.smp -K true -A 16 -Bd -- -root /usr/local/couchdb/bin/.. -progname couchdb -- -home /home/couchdb -- -boot /usr/local/couchdb/bin/../releases/2.0.0/couchdb -name couchdb@n1couchdb.aniu.so -setcookie monster -kernel error_logger silent -sasl sasl_error_logger false -noshell -noinput -kernel inet_dist_listen_min 9100 -kernel inet_dist_listen_max 9200 -config /usr/local/couchdb/bin/../releases/2.0.0/sys.config couchdb 3928 3901 0 21:09 ? 00:00:00 erl_child_setup 1024 couchdb 3934 3928 0 21:09 ? 00:00:00 sh -s disksup couchdb 3936 3928 0 21:09 ? 00:00:00 /usr/local/couchdb/bin/../lib/os_mon-2.4.2/priv/bin/memsup couchdb 3937 3928 0 21:09 ? 00:00:00 /usr/local/couchdb/bin/../lib/os_mon-2.4.2/priv/bin/cpu_sup couchdb 3938 3928 0 21:09 ? 00:00:00 inet_gethost 4 couchdb 3939 3938 0 21:09 ? 00:00:00 inet_gethost 4 root 3961 3945 0 21:12 pts/3 00:00:00 grep couchdb [root@n1couchdb ~]# netstat -nlpt Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:5984 0.0.0.0:* LISTEN 4355/beam.smp tcp 0 0 0.0.0.0:5986 0.0.0.0:* LISTEN 4355/beam.smp tcp 0 0 0.0.0.0:4369 0.0.0.0:* LISTEN 3582/epmd 检查couchdb是否正常工作 [root@n1couchdb ~]# curl -I http://0.0.0.0:5984/_utils/index.html HTTP/1.1 200 OK Cache-Control: private, must-revalidate Content-Length: 1886 Content-Security-Policy: default-src 'self'; img-src 'self' data:; font-src 'self'; script-src 'self' 'unsafe-eval'; style-src 'self' 'unsafe-inline'; Content-Type: text/html Date: Tue, 04 Jul 2017 13:26:40 GMT last-modified: Tue, 04 Jul 2017 12:43:17 GMT Server: CouchDB/2.0.0 (Erlang OTP/19) 单点情况下通过浏览器访问 http://192.168.0.154:5984/_utils/#verifyinstall,进行初始化设置,如下图: 这里初始设置 username: admin password: password ,方便记忆,后面需要再改 登录成功,配置单节点 CouchDB管理页面还有许多操作,这里就不过多演示 安装过程中报错修复 ERROR: compile failed while processing /usr/local/src/apache-couchdb-2.0.0/src/couch: rebar_abort 解决报错: cd /usr/local/src/apache-couchdb-2.0.0 egrep -r js-1.8.5 * vim +106 src/couch/rebar.config.script {"linux", CouchJSPath, CouchJSSrc, [{env, [{"CFLAGS", JS_CFLAGS ++ " -DXP_UNIX -I/usr/include/js"}, {"LDFLAGS", JS_LDFLAGS ++ " -lm"}]}]}, {"linux", CouchJSPath, CouchJSSrc, [{env, [{"CFLAGS", JS_CFLAGS ++ " -DXP_UNIX -I/usr/local/include/js"}, {"LDFLAGS", JS_LDFLAGS ++ " -lm"}]}]}, # 根本原因就是couchdb编译的时候找到默认的js # 还有种方式就是做软链接 ln -s /usr/local/include/js /usr/include/j # 这种方法尚未尝试,修改完成就可以继续编译啦 安装依赖缺失报错 [root@localhost apache-couchdb-2.0.0]# make release Uncaught error in rebar_core: {'EXIT', {undef, [{crypto,start,[],[]}, {rebar,run_aux,2, [{file,"src/rebar.erl"},{line,212}]}, {rebar,main,1, [{file,"src/rebar.erl"},{line,58}]}, {escript,run,2, [{file,"escript.erl"},{line,760}]}, {escript,start,1, [{file,"escript.erl"},{line,277}]}, {init,start_em,1,[]}, {init,do_boot,3,[]}]}} make: *** [couch] Error 1 次报错是编译erlang前没安装openssl-devel,安装openssl-devel重新编译erlang WARN: 'generate' command does not apply to directory /usr/local/src/apache-couchdb-2.0.0 ... done You can now copy the rel/couchdb directory anywhere on your system. Start CouchDB with ./bin/couchdb from within that directory. 
下面是程序本身BUG [notice] 2017-07-04T13:18:55.255565Z couchdb@n1couchdb.aniu.so <0.328.0> -------- chttpd_auth_cache changes listener died database_does_not_exist at mem3_shards:load_shards_from_db/6(line:327) <= mem3_shards:load_shards_from_disk/1(line:315) <= mem3_shards:load_shards_from_disk/2(line:331) <= mem3_shards:for_docid/3(line:87) <= fabric_doc_open:go/3(line:38) <= chttpd_auth_cache:ensure_auth_ddoc_exists/2(line:187) <= chttpd_auth_cache:listen_for_changes/1(line:134) [error] 2017-07-04T13:18:55.255823Z couchdb@n1couchdb.aniu.so emulator -------- Error in process <0.9372.0> on node 'couchdb@n1couchdb.aniu.so' with exit value: {database_does_not_exist,[{mem3_shards,load_shards_from_db,"_users",[{file,"src/mem3_shards.erl"},{line,327}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,315}]},{mem3_shards,load_shards_from_disk,2,[{file,"src/mem3_shards.erl"},{line,331}]},{mem3_shards,for_docid,3,[{file,"src/mem3_shards.erl"},{line,87}]},{fabric_doc_open,go,3,[{file,"src/fabric_doc_open.erl"},{line,38}]},{chttpd_auth_cache,ensure_auth_ddoc_exists,2,[{file,"src/chttpd_auth_cache.erl"},{line,187}]},{chttpd_auth_cache,listen_for_changes,1,[{file,"src/chttpd_auth_cache.erl"},{line,134}]}]} 作为单个节点运行2.0时,它不会在启动时创建系统数据库,必须手动执行此操作: curl -X PUT http://0.0.0.0:5984/_users curl -X PUT http://0.0.0.0:5984/_replicator curl -X PUT http://0.0.0.0:5984/_global_changes http://guide.couchdb.org/draft/security.html http://docs.couchdb.org/en/latest/install/setup.html https://medium.com/linagora-engineering/setting-up-a-couchdb-2-cluster-on-centos-7-8cbf32ae619f http://docs.couchdb.org/en/2.0.0/install/unix.html https://issues.apache.org/jira/browse/COUCHDB-2995 # 最重要报错修复 http://docs.couchdb.org/en/2.0.0/cluster/setup.html#the-cluster-setup-wizard
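如果已经在 Fauxton 页面里设置了管理员(上文示例为 admin/password),再执行上面的 PUT 请求需要带上认证;创建完系统库后可以顺手检查一下节点状态(仅为示例,账号密码请按实际修改):

curl -X PUT http://admin:password@127.0.0.1:5984/_users
curl -X PUT http://admin:password@127.0.0.1:5984/_replicator
curl -X PUT http://admin:password@127.0.0.1:5984/_global_changes
curl http://admin:password@127.0.0.1:5984/              # 返回 {"couchdb":"Welcome",...} 说明服务正常
curl http://admin:password@127.0.0.1:5984/_membership   # 查看节点成员,单节点模式下应只有自己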
Alibaba JStorm 是一个强大的企业级流式计算引擎,是Apache Storm 的4倍性能, 可以自由切换行模式或mini-batch 模式,JStorm 不仅提供一个流式计算引擎, 还提供实时计算的完整解决方案, 涉及到更多的组件, 如jstorm-on-yarn, jstorm-on-docker, SQL Engine, Exactly-Once Framework 等等。 JStorm 是一个分布式实时计算引擎 JStorm 是一个类似Hadoop MapReduce的系统, 用户按照指定的接口实现一个任务,然后将这个任务递交给JStorm系统,JStorm将这个任务跑起来,并且按7 * 24小时运行起来,一旦中间一个Worker 发生意外故障, 调度器立即分配一个新的Worker替换这个失效的Worker。 因此,从应用的角度,JStorm应用是一种遵守某种编程规范的分布式应用。从系统角度, JStorm是一套类似MapReduce的调度系统。 从数据的角度,JStorm是一套基于流水线的消息处理机制。 实时计算现在是大数据领域中最火爆的一个方向,因为人们对数据的要求越来越高,实时性要求也越来越快,传统的Hadoop MapReduce,逐渐满足不了需求,因此在这个领域需求不断。 Storm组件和Hadoop组件对比 JStorm Hadoop Nimbus JobTracker Supervisor TaskTracker Worker Child Topology Spout/Bolt Mapper/Reducer 在Storm和JStorm出现以前,市面上出现很多实时计算引擎,但自Storm和JStorm出现后,基本上可以说一统江湖: 究其优点: 开发非常迅速:接口简单,容易上手,只要遵守Topology、Spout和Bolt的编程规范即可开发出一个扩展性极好的应用,底层RPC、Worker之间冗余,数据分流之类的动作完全不用考虑 扩展性极好:当一级处理单元速度,直接配置一下并发数,即可线性扩展性能 健壮强:当Worker失效或机器出现故障时, 自动分配新的Worker替换失效Worker 数据准确性:可以采用Ack机制,保证数据不丢失。 如果对精度有更多一步要求,采用事务机制,保证数据准确。 实时性高: JStorm 的设计偏向单行记录,因此,在时延较同类产品更低 JStorm处理数据的方式是基于消息的流水线处理, 因此特别适合无状态计算,也就是计算单元的依赖的数据全部在接受的消息中可以找到, 并且最好一个数据流不依赖另外一个数据流。 因此,常常用于: 日志分析,从日志中分析出特定的数据,并将分析的结果存入外部存储器如数据库。目前,主流日志分析技术就使用JStorm或Storm 管道系统, 将一个数据从一个系统传输到另外一个系统, 比如将数据库同步到Hadoop 消息转化器, 将接受到的消息按照某种格式进行转化,存储到另外一个系统如消息中间件 统计分析器, 从日志或消息中,提炼出某个字段,然后做count或sum计算,最后将统计值存入外部存储器。中间处理过程可能更复杂。 实时推荐系统, 将推荐算法运行在jstorm中,达到秒级的推荐效果 首先,JStorm有点类似于Hadoop的MR(Map-Reduce),但是区别在于,hadoop的MR,提交到hadoop的MR job,执行完就结束了,进程就退出了,而一个JStorm任务(JStorm中称为topology),是7*24小时永远在运行的,除非用户主动kill。 JStorm组件 接下来是一张比较经典的Storm的大致的结构图(跟JStorm一样): 图中的水龙头(好吧,有点俗)就被称作spout,闪电被称作bolt。 在JStorm的topology中,有两种组件:spout和bolt。 # spout spout代表输入的数据源,这个数据源可以是任意的,比如说kafaka,DB,HBase,甚至是HDFS等,JStorm从这个数据源中不断地读取数据,然后发送到下游的bolt中进行处理。 # bolt bolt代表处理逻辑,bolt收到消息之后,对消息做处理(即执行用户的业务逻辑),处理完以后,既可以将处理后的消息继续发送到下游的bolt,这样会形成一个处理流水线(pipeline,不过更精确的应该是个有向图);也可以直接结束。 通常一个流水线的最后一个bolt,会做一些数据的存储工作,比如将实时计算出来的数据写入DB、HBase等,以供前台业务进行查询和展现。 组件的接口 JStorm框架对spout组件定义了一个接口:nextTuple,顾名思义,就是获取下一条消息。执行时,可以理解成JStorm框架会不停地调这个接口,以从数据源拉取数据并往bolt发送数据。 同时,bolt组件定义了一个接口:execute,这个接口就是用户用来处理业务逻辑的地方。 每一个topology,既可以有多个spout,代表同时从多个数据源接收消息,也可以多个bolt,来执行不同的业务逻辑。 调度和执行 接下来就是topology的调度和执行原理,对一个topology,JStorm最终会调度成一个或多个worker,每个worker即为一个真正的操作系统执行进程,分布到一个集群的一台或者多台机器上并行执行。 而每个worker中,又可以有多个task,分别代表一个执行线程。每个task就是上面提到的组件(component)的实现,要么是spout要么是bolt。 用户在提交一个topology的时候,会指定以下的一些执行参数: #总worker数 即总的进程数。举例来说,我提交一个topology,指定worker数为3,那么最后可能会有3个进程在执行。之所以是可能,是因为根据配置,JStorm有可能会添加内部的组件,如_acker或者__topology_master(这两个组件都是特殊的bolt),这样会导致最终执行的进程数大于用户指定的进程数。我们默认是如果用户设置的worker数小于10个,那么__topology_master 只是作为一个task存在,不独占worker;如果用户设置的worker数量大于等于10个,那么__topology_master作为一个task将独占一个worker #每个component的并行度 上面提到每个topology都可以包含多个spout和bolt,而每个spout和bolt都可以单独指定一个并行度(parallelism),代表同时有多少个线程(task)来执行这个spout或bolt。 JStorm中,每一个执行线程都有一个task id,它从1开始递增,每一个component中的task id是连续的。 还是上面这个topology,它包含一个spout和一个bolt,spout的并行度为5,bolt并行度为10。那么我们最终会有15个线程来执行:5个spout执行线程,10个bolt执行线程。 这时spout的task id可能是1~5,bolt的task id可能是6~15,之所以是可能,是因为JStorm在调度的时候,并不保证task id一定是从spout开始,然后到bolt的。但是同一个component中的task id一定是连续的。 #每个component之间的关系 即用户需要去指定一个特定的spout发出的数据应该由哪些bolt来处理,或者说一个中间的bolt,它发出的数据应该被下游哪些bolt处理。 还是以上面的topology为例,它们会分布在3个进程中。JStorm使用了一种均匀的调度算法,因此在执行的时候,你会看到,每个进程分别都各有5个线程在执行。当然,由于spout是5个线程,不能均匀地分配到3个进程中,会出现一个进程只有1个spout线程的情况;同样地,也会出现一个进程中有4个bolt线程的情况。 在一个topology的运行过程中,如果一个进程(worker)挂掉了,JStorm检测到之后,会不断尝试重启这个进程,这就是7*24小时不间断执行的概念。 消息的通信 上面提到,spout的消息会发送给特定的bolt,bolt也可以发送给其他的bolt,那这之间是如何通信的呢? 
首先,从spout发送消息的时候,JStorm会计算出消息要发送的目标task id列表,然后看目标task id是在本进程中,还是其他进程中,如果是本进程中,那么就可以直接走进程内部通信(如直接将这个消息放入本进程中目标task的执行队列中);如果是跨进程,那么JStorm会使用netty来将消息发送到目标task中。 实时计算结果输出 JStorm是7*24小时运行的,外部系统如果需要查询某个特定时间点的处理结果,并不会直接请求JStorm(当然,DRPC可以支持这种需求,但是性能并不是太好)。一般来说,在JStorm的spout或bolt中,都会有一个定时往外部存储写计算结果的逻辑,这样数据可以按照业务需求被实时或者近实时地存储起来,然后直接查询外部存储中的计算结果即可。 以上内容直接粘贴JStorm官网,切勿吐槽 二、 Jstorm 集群安装 1、系统环境准备 # OS: CentOS 6.8 mininal # host.ip: 10.1.1.78 aniutv-1 # host.ip: 10.1.1.80 aniutv-2 # host.ip: 10.1.1.97 aniutv-5 2、安装目录自定义 # jstorm : /opt/jstorm (源码安装), zookeeper : /opt/zookeeper(源码安装) , java : /usr/java/jdk1.7.0_79 (rpm包安装) 3、zookeeper 集群安装 zookeeper 集群参考(http://blog.csdn.net/wh211212/article/details/56014983) 4、zeromq 安装 zeromq下载地址:http://zeromq.org/area:download/ 下载zeromq-4.2.1.tar.gz 到/usr/local/src cd /usr/local/src && tar -zxf zeromq-4.2.1.tar.gz -C /opt cd /opt/zeromq-4.2.1 && ./configure && make && sudo make install && sudo ldconfig 5、jzmq安装 cd /opt && git clone https://github.com/nathanmarz/jzmq.git ./autogen.sh && ./configure && make && make install 6、JStorm安装 wget https://github.com/alibaba/jstorm/releases/download/2.1.1/jstorm-2.1.1.zip -P /usr/local/src cd /usr/local/src && unzip jstorm-2.1.1.zip -d /opt cd /opt && mv jstorm-2.1.1 jstorm mkdir /opt/jstorm/jstorm_data echo '# jstorm env' >> ~/.bashrc echo 'export JSTORM_HOME=/opt/jstorm' >> ~/.bashrc echo 'export PATH=$PATH:$JSTORM_HOME/bin' >> ~/.bashrc source ~/.bashrc # JStorm 配置 sed -i /'storm.zookeeper.servers:/a\ - "10.1.1.78"' /opt/jstorm/conf/storm.yaml sed -i /'storm.zookeeper.servers:/a\ - "10.1.1.80"' /opt/jstorm/conf/storm.yaml sed -i /'storm.zookeeper.servers:/a\ - "10.1.1.97"' /opt/jstorm/conf/storm.yaml sed -i /'storm.zookeeper.root/a\ nimbus.host: "10.1.1.78"' /opt/jstorm/conf/storm.yaml<> storm.zookeeper.servers: 表示zookeeper 的地址, nimbus.host: 表示nimbus的地址 storm.zookeeper.root: 表示JStorm在zookeeper中的根目录,当多个JStorm共享一个zookeeper时,需要设置该选项,默认即为“/jstorm” storm.local.dir: 表示JStorm临时数据存放目录,需要保证JStorm程序对该目录有写权限 java.library.path: Zeromq 和java zeromq library的安装目录,默认"/usr/local/lib:/opt/local/lib:/usr/lib" supervisor.slots.ports: 表示Supervisor 提供的端口Slot列表,注意不要和其他端口发生冲突,默认是68xx,而Storm的是67xx topology.enable.classloader: false, 默认关闭classloader,如果应用的jar与JStorm的依赖的jar发生冲突,比如应用使用thrift9,但jstorm使用thrift7时,就需要打开classloader。建议在集群级别上默认关闭,在具体需要隔离的topology上打开这个选项。 # 下面命令只需要在安装 jstorm_ui 和提交jar节点的机器上面执行即可 mkdir ~/.jstorm cp -f $JSTORM_HOME/conf/storm.yaml ~/.jstorm 7、安装JStorm Web UI 强制使用tomcat7.0或以上版本,切记拷贝~/.jstorm/storm.yaml, Web UI 可以和Nimbus在同一个节点上 mkdir ~/.jstorm cp -f $JSTORM_HOME/conf/storm.yaml ~/.jstorm 下载tomcat 7.x (以apache-tomcat-7.0.37 为例) tar -xzf apache-tomcat-7.0.75.tar.gz cd apache-tomcat-7.0.75 cd webapps cp $JSTORM_HOME/jstorm-ui-2.1.1.war ./ mv ROOT ROOT.old ln -s jstorm-ui-2.1.1 ROOT # 另外不是 ln -s jstorm-ui-2.1.1.war ROOT 这个要小心 cd ../bin ./startup.sh 8、JStorm启动 在nimbus 节点(10.1.1.78)上执行 “nohup jstorm nimbus &”, 查看$JSTORM_HOME/logs/nimbus.log检查有无错误 在supervisor节点(10.1.1.78,10.1.1.80,10.1.1.97)上执行 “nohup jstorm supervisor &”, 查看$JSTORM_HOME/logs/supervisor.log检查有无错误 9、JStorm Web UI JStorm集群启动成功截图如下: # JStorm 集群安装问题总结 1、注意/etc/hosts设置,添加相对应的ip hostname 2、设置ssh免密操作(此步骤在zookeeper集群完成) 3、注意各服务的环境变量设置
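集群启动、Web UI 能正常访问之后,可以用 jstorm 客户端命令提交一个自带的示例 topology 做冒烟测试(下面的 jar 包名和主类取自 JStorm 源码里的 sequence-split-merge 示例工程,仅供参考,请替换为自己实际编译出来的 jar 和主类):

jstorm list                 # 查看集群上已提交的 topology
jstorm jar sequence-split-merge-2.1.1-jar-with-dependencies.jar com.alipay.dw.jstorm.example.sequence.SequenceTopology seq_test
jstorm kill seq_test        # 验证完毕后删除该 topology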
运维的自动化一般需要经过四个阶段:手工操作->脚本自动化->WEB自动化->调度自动化,目前很多公司的运维同仁处于“脚本自动化”阶段,蓝鲸智云开放的社区版V1系列,就是为这个阶段的同仁准备的产品,可以帮助各位进入“WEB自动化”;当进入“WEB自动化”之后,开始向更高的阶段发展,因而推出了社区版V2系列,这个版本基于之前的版本,不仅提供了API,而且还推出了可以低成本构建运维工具自建运营系统的“蓝鲸智云集成平台”,直接让运维行业的同仁进入“调度自动化”阶段。 一、蓝鲸介绍 蓝鲸官网:http://bk.tencent.com/ 蓝鲸智云社区:http://bbs.bk.tencent.com/forum.php 二、蓝鲸安装准备 2.1、蓝鲸相关软件包及加密证书(内测版本需申请) 2.2、bkv2.0.1.tar.gz && ssl_certificates.tar.gz 2.3、相关安装需关注蓝鲸公众号获取最新版本及获取方式,生成证书参考社区教程 三、系统环境 Hostname IP Address OS version Hadoop role Node role aniutv-3 10.1.1.127 CentOS 6.8 aniutv-5 10.1.1.97 CentOS 6.8 App正式 passagent,rabbitmq aniutv-6 10.1.1.59 CentOS 6.8 App测试 passagent 3.1、参考链接 http://bbs.bk.tencent.com/forum.php?mod=viewthread&tid=167(安装前建议详细阅读官方社区图文教程) 3.2、蓝鲸基础模块安装 # 注:建议安装到/data,可以自定义其他目录如(/opt),然后上传所需安装包到服务下的/data目录 tar zxf bkv2.0.1.tar.gz # 解压蓝鲸的安装包 cp ssl_certificates.tar.gz bkv2.0.1/ # 拷贝证书 (证书下载在下方注释) cd bkv2.0.1/ vi bk.conf # 修改本机的配置 # 注,仔细阅读配置文件中PASSAGENT_TESTIP,PASSAGENT_PRODIP,强烈建议安装配置不要只安装单个模块,本人第一次安装由于只安装了pass,导致蓝鲸的很多功能都没有使用到,确认配置文件没问题之后,执行下面命令: ./bk.sh init paas # 启动一些服务,初始化环境 ./bk.sh install paas # 安装集成平台 安装完成,查看服务状态, # 蓝鲸pass平台安装完成之后,通过域名或者ip地址访问查看是否正常,正常如下,默认登录用户名密码:admin,blueking # 域名要在本地hosts指定 3.3、蓝鲸PassAgent_prod安装,即安装App正式,和Rabbitmq tar zxf bkv2.0.0.tar.gz # 解压蓝鲸的安装包 cp ssl_certificates.tar.gz bkv2.0.0/ # 拷贝证书 cd bkv2.0.0/ vi bk.conf # 修改本机的配置 ./bk.sh init paasagent # 初始化 ./bk.sh install paasagent # 安装App正式环境 ./bk.sh install rabbitmq # 后台任务(celery任务)的消息队列 # 注:配置文件很重要 3.4、蓝鲸PassAgent_test安装,即安装App测试环境 # 注:相关包可以使用scp从pass服务器拷贝过来 tar zxf bkv2.0.1.tar.gz # 解压蓝鲸的安装包 cp ssl_certificates.tar.gz bkv2.0.1/ # 拷贝证书 cd bkv2.0.1/ vi bk.conf # 修改本机的配置 ./bk.sh init paasagent # 初始化 ./bk.sh install paasagent # 安装App正式环境 # 注:仔细查看配置文件,确保配置文件正确 3.4、蓝鲸访问测试 # 使用管理员权限修改本地hosts,打开C:\Windows\System32\drivers\etc\hosts,添加一下内容: # tencent bk 10.1.1.127 cmdb.aniu.tv 10.1.1.127 job.aniu.tv 10.1.1.127 paas.aniu.tv 10.1.1.127 paasagentt.aniu.tv 10.1.1.127 paasagento.aniu.tv # 使用 浏览器访问http://paas.aniu.tv/,默认用户名admin,默认密码blueking 登录到工作台,访问开发者中心,查看服务器注册状态及信息如下: # 查看内置应用 # 查看第三方服务 # 默认初始内置应用未安装,需要手动安装,点击部署,会自动安装内置应用,全部安装完成,访问蓝鲸如下: # 通过平台自动安装agent 通过工作天,agent安装模块来自定部署agent到其他服务器上,建议使用自动部署的方式,填写需要安装agent的服务器地址,建议使用root安装agent,配置完成点击安装,蓝鲸会自动安装agent到你需要的服务器上,安装成功正常后会有数字显示,同时可以直接在pass平台上查看安装详情,非常方便。 四、蓝鲸安装总结 4.1、系统环境选择,建议选三天服务器,配置参考社区蓝鲸安装手册,里面有详细介绍。 4.2、最主要的是配置文件,搞清楚那台是PASSAGENT_ETST,和PASSAGENT_PROD即可,基础模块一般不会搞错。 4.3、假如,第一次安装错误,重装的时候,请把所有蓝鲸先关的服务都停掉,我第一次重装没成功就是因为有些服务没有停掉。 4.4、具体使用请参考蓝鲸社区,有详细的使用说明和配置说明,以及常见报错解决方法。
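各模块装完以后,可以在任意一台能解析这些域名的机器上用 curl 简单探活,确认 nginx/paas 等组件有响应(域名与 IP 沿用上文示例,请按实际 bk.conf 里的配置替换):

for d in paas.aniu.tv cmdb.aniu.tv job.aniu.tv; do
    curl -s -o /dev/null -w "$d -> HTTP %{http_code}\n" -H "Host: $d" http://10.1.1.127/
done
# 各域名返回 200/302 等正常状态码即说明对应服务已起来,再到浏览器里用 admin/blueking 登录做功能验证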
Apache HTTP Server(简称Apache)是Apache软件基金会的一个开放源代码的网页服务器软件,可以在大多数电脑操作系统中运行,由于其跨平台和安全性(尽管不断有新的漏洞被发现,但由于其开放源代码的特点,漏洞总能被很快修补。因此总合来说,其安全性还是相当高的。)。被广泛使用,是最流行的Web服务器软件之一。它快速、可靠并且可通过简单的API扩充,将Perl/Python等解释器编译到服务器中。 [root@linuxprobe ~]# yum -y install httpd # 删除默认欢迎页面 [root@linuxprobe ~]# rm -f /etc/httpd/conf.d/welcome.conf [2] 配置httpd,将服务器名称替换为您自己的环境 [root@linuxprobe ~]# vi /etc/httpd/conf/httpd.conf # line 86: 改变管理员的邮箱地址 ServerAdmin root@linuxprobe.org # line 95: 改变域名信息 ServerName www.linuxprobe.org:80 # line 151: none变成All AllowOverride All # line 164: 添加只能使用目录名称访问的文件名 DirectoryIndex index.html index.cgi index.php # add follows to the end # server's response header(安全性) ServerTokens Prod # keepalive is ON KeepAlive On [root@linuxprobe ~]# systemctl start httpd [root@linuxprobe ~]# systemctl enable httpd [3] 如果Firewalld正在运行,请允许HTTP服务。,HTTP使用80 / TCP [root@linuxprobe ~]# firewall-cmd --add-service=http --permanent success [root@linuxprobe ~]# firewall-cmd --reload success [4] 创建一个HTML测试页,并使用Web浏览器从客户端PC访问它。如果显示以下页面,是正确的 [root@linuxprobe ~]# vi /var/www/html/index.html <html> <body> <div style="width: 100%; font-size: 40px; font-weight: bold; text-align: center;"> Welcome access LinuxProbe.org,This is Test Page! </div> </body> </html> 三、支持Perl 启用CGI执行并使用Perl脚本 [1] 安装Perl. [root@linuxprobe ~]# yum -y install perl perl-CGI [2] 默认情况下,在“/var/www/cgi-bin”目录下允许CGI。 可以使用Perl Scripts放在目录下。然而,它下面的所有文件都被处理为CGI。 # 下面的设置是CGI的设置 [root@linuxprobe ~]# grep -n "^ *ScriptAlias" /etc/httpd/conf/httpd.conf 247: ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" [3] 如果你想允许在其他目录中的CGI,配置如下。 例如,在“/var/www/html/cgi-enabled”中允许。 [root@linuxprobe ~]# vi /etc/httpd/conf.d/cgi-enabled.conf # create new # processes .cgi and .pl as CGI scripts <Directory "/var/www/html/cgi-enabled"> Options +ExecCGI AddHandler cgi-script .cgi .pl </Directory> [root@linuxprobe ~]# systemctl restart httpd [4] 如果SELinux被启用,并且允许CGI在不是像上面[3]的默认目录下,更改规则如下。 [root@linuxprobe ~]# chcon -R -t httpd_sys_script_exec_t /var/linuxprobe/html/cgi-enabled [root@linuxprobe ~]# semanage fcontext -a -t httpd_sys_script_exec_t /var/www/html/cgi-enabled [5] 创建一个CGI测试页面,并使用Web浏览器从客户端PC访问它。如果显示以下页面,说明配置正确。 [root@linuxprobe ~]# vi /var/www/html/cgi-enabled/index.cgi #!/usr/bin/perl print "Content-type: text/html\n\n"; print "<html>\n<body>\n"; print "<div style=\"width: 100%; font-size: 40px; font-weight: bold; text-align: center;\">\n"; print "CGI Test Page"; print "\n</div>\n"; print "</body>\n</html>\n"; [root@linuxprobe ~]# chmod 705 /var/www/html/cgi-enabled/index.cgi 四、支持PHP 配置httpd以使用PHP脚本 [1] 安装PHP. 
[root@linuxprobe ~]# yum -y install php php-mbstring php-pear [root@linuxprobe ~]# vi /etc/php.ini # line 878: 取消注释,设置时区 date.timezone = "Asia/Shanghai" [root@linuxprobe ~]# systemctl restart httpd [2] 创建一个PHP测试页面,并使用Web浏览器从客户端PC访问它。如果显示以下页面,它是确定。 [root@linuxprobe ~]# vi /var/www/html/index.php <html> <body> <div style="width: 100%; font-size: 40px; font-weight: bold; text-align: center;"> <?php print Date("Y/m/d"); ?> </div> </body> </html> [2] 默认情况下,在“/var/www/cgi-bin”目录下允许CGI。 可以使用Perl Scripts放在目录下。然而,它下面的所有文件都被处理为CGI。 # 下面的设置是CGI的设置 [root@linuxprobe ~]# grep -n "^ *ScriptAlias" /etc/httpd/conf/httpd.conf 247: ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" [3] 如果你想允许在其他目录中的CGI,配置如下。 例如,在“/var/www/html/cgi-enabled”中允许。 [root@linuxprobe ~]# vi /etc/httpd/conf.d/cgi-enabled.conf # create new # processes .rb as CGI scripts <Directory "/var/www/html/cgi-enabled"> Options +ExecCGI AddHandler cgi-script .rb </Directory> [root@linuxprobe ~]# systemctl restart httpd [4] 如果SELinux被启用,并且允许CGI在不是像上面[3]的默认目录下,更改规则如下。 [root@linuxprobe ~]# chcon -R -t httpd_sys_script_exec_t /var/www/html/cgi-enabled [root@linuxprobe ~]# semanage fcontext -a -t httpd_sys_script_exec_t /var/www/html/cgi-enabled [5] Create a CGI test page and access to it from client PC with web browser. It's OK if following page is shown. [root@linuxprobe ~]# vi /var/www/html/cgi-enabled/index.rb #!/usr/bin/ruby print "Content-type: text/html\n\n" print "<html>\n<body>\n" print "<div style=\"width: 100%; font-size: 40px; font-weight: bold; text-align: center;\">\n" print "Ruby Script Test Page" print "\n</div>\n" print "</body>\n</html>\n" [root@linuxprobe ~]# chmod 705 /var/www/html/cgi-enabled/index.rb 六、支持Python 启用CGI执行并使用Python脚本 [2] 默认情况下,在“/var/www/cgi-bin”目录下允许CGI。 可以使用Perl Scripts放在目录下。然而,它下面的所有文件都被处理为CGI。 # 下面的设置是CGI的设置 [root@linuxprobe ~]# grep -n "^ *ScriptAlias" /etc/httpd/conf/httpd.conf 247: ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" [3] 如果你想允许在其他目录中的CGI,配置如下。 例如,在“/var/www/html/cgi-enabled”中允许。 [root@linuxprobe ~]# vi /etc/httpd/conf.d/cgi-enabled.conf # create new # processes .py as CGI scripts <Directory "/var/www/html/cgi-enabled"> Options +ExecCGI AddHandler cgi-script .py </Directory> [root@linuxprobe ~]# systemctl restart httpd [4] 如果SELinux被启用,并且允许CGI在不是像上面[3]的默认目录下,更改规则如下。 [root@linuxprobe ~]# chcon -R -t httpd_sys_script_exec_t /var/www/html/cgi-enabled [root@linuxprobe ~]# semanage fcontext -a -t httpd_sys_script_exec_t /var/www/html/cgi-enabled [5] Create a CGI test page and access to it from client PC with web browser. It's OK if following page is shown. [root@linuxprobe ~]# vi /var/www/html/cgi-enabled/index.py #!/usr/bin/env python print "Content-type: text/html\n\n" print "<html>\n<body>\n" print "<div style=\"width: 100%; font-size: 40px; font-weight: bold; text-align: center;\">\n" print "Python Script Test Page" print "\n</div>\n" print "</body>\n</html>\n" [root@linuxprobe ~]# chmod 705 /var/www/html/cgi-enabled/index.py 7、支持Userdir 启用userdir,用户可以使用此设置创建网站 [1] 配置 httpd. 
[root@linuxprobe ~]# vi /etc/httpd/conf.d/userdir.conf # line 17: comment out #UserDir disabled # line 24: uncomment UserDir public_html # line 31 - 35 <Directory "/home/*/public_html"> AllowOverride All # change Options None # change Require method GET POST OPTIONS </Directory> [root@linuxprobe ~]# systemctl restart httpd [2] 创建一个测试页,使用普通用户通过客户端PC与Web浏览器和访问它,如果显示以下页面,就是正确的 [cent@linuxprobe ~]$ mkdir public_html [cent@linuxprobe ~]$ chmod 711 /home/cent [cent@linuxprobe ~]$ chmod 755 /home/cent/public_html [cent@linuxprobe ~]$ vi ./public_html/index.html <html> <body> <div style="width: 100%; font-size: 40px; font-weight: bold; text-align: center;"> UserDir Test Page </div> </body> </html> 浏览器访问:http://linuxprobe.org/~wang/,出现如下界面 8、设置虚拟主机 配置虚拟主机以使用多个域名。 以下示例在域名为[linuxprobe.org],虚拟域名为[virtual.host(根目录[/home/wang/public_html]]的环境中设置。 必须为此示例设置Userdir的设置 DocumentRoot /home/cent/public_html ServerName www.virtual.host ServerAdmin webmaster@virtual.host ErrorLog logs/virtual.host-error_log CustomLog logs/virtual.host-access_log combined </VirtualHost> [root@linuxprobe ~]# systemctl restart httpd [2]创建测试页并使用Web浏览器从客户端计算机访问它。如果显示以下页面,则是正确的: [cent@linuxprobe ~]$ vi ~/public_html/virtual.php <html> <body> <div style="width: 100%; font-size: 40px; font-weight: bold; text-align: center;"> Virtual Host Test Page </div> </body> </html> [3]如果访问测试时看不到相应页面,可通过下面命令进行测试: [root@linuxprobe ~]# yum -y install elinks^C [root@linuxprobe ~]# elinks http://www.virtual.host/virtual.php 9、创建SSL证书 创建自己的SSL证书。但是,如果您使用您的服务器作为业务,最好购买和使用来自Verisigh的正式证书等。 [root@linuxprobe ~]# cd /etc/pki/tls/cert cert.pem certs/ [root@linuxprobe ~]# cd /etc/pki/tls/certs/ [root@linuxprobe certs]# make server.key umask 77 ; \ /usr/bin/openssl genrsa -aes128 2048 > server.key Generating RSA private key, 2048 bit long modulus ...............................................................+++ ....................................................................................................+++ e is 65537 (0x10001) Enter pass phrase: Verifying - Enter pass phrase: [root@linuxprobe certs]# openssl rsa -in server.key -out server.key Enter pass phrase for server.key: writing RSA key [root@linuxprobe certs]# make server.csr umask 77 ; \ /usr/bin/openssl req -utf8 -new -key server.key -out server.csr You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [XX]:CN #国家后缀 State or Province Name (full name) []:Shanghai #省 Locality Name (eg, city) [Default City]:Shanghai #市 Organization Name (eg, company) [Default Company Ltd]:LinuxProbe #公司 Organizational Unit Name (eg, section) []:DevOps #部门 Common Name (eg, your name or your server's hostname) []:linuxprobe.org #主机名 Email Address []:root@linuxprobe.org #邮箱 Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: #默认 An optional company name []: #默认 [root@linuxprobe certs]# openssl x509 -in server.csr -out server.crt -req -signkey server.key -days 3650 Signature ok subject=/C=CN/ST=Shanghai/L=Shanghai/O=LinuxProbe/OU=DevOps/CN=linuxprobe.org/emailAddress=root@linuxprobe.org Getting Private key 10、配置SSL [1] 配置SSL. 
[root@linuxprobe ~]# yum -y install mod_ssl [root@linuxprobe ~]# vi /etc/httpd/conf.d/ssl.conf # line 59: 取消注释 DocumentRoot "/var/www/html" # line 60: 取消注释,定义域名 ServerName linuxprobe.org:443 # line 75: 改变SSLProtocol SSLProtocol -All +TLSv1 +TLSv1.1 +TLSv1.2 # line 100: 改成刚刚创建的server.crt SSLCertificateFile /etc/pki/tls/certs/server.crt # line 107: 改成刚刚创建的server.key SSLCertificateKeyFile /etc/pki/tls/certs/server.key [root@www ~]# systemctl restart httpd [2] 如果Firewalld正在运行,请允许HTTPS服务。 HTTPS使用443 / TCP [root@www ~]# firewall-cmd --add-service=https --permanent success [root@www ~]# firewall-cmd --reload success [3] 使用Web浏览器通过HTTPS从客户端计算机访问测试页。下面的示例是Fiorefix。显示以下屏幕,因为证书是自己创建的,但它没有ploblem,继续下一步。 11、启用基本身份验证 启用基本身份验证以限制特定网页的访问 [1]例如,在目录[/var/www/html/auth-basic]下设置基本身份验证设置。 [root@linuxprobe ~]# vi /etc/httpd/conf.d/auth_basic.conf # 创建新配置文件 <Directory /var/www/html/auth-basic> AuthType Basic AuthName "Basic Authentication" AuthUserFile /etc/httpd/conf/.htpasswd require valid-user </Directory> # 添加用户:使用“-c”创建新文件(仅为初始注册添加“-c”选项) [root@linuxprobe ~]# htpasswd -c /etc/httpd/conf/.htpasswd wang New password: # set password Re-type new password: # confirm Adding password for user wang [root@linuxprobe ~]# systemctl restart httpd [root@linuxprobe ~]# mkdir /var/www/html/auth-basic [root@linuxprobe ~]# vi /var/www/html/auth-basic/index.html # create a test page <html> <body> <div style="width: 100%; font-size: 40px; font-weight: bold; text-align: wanger;"> Test Page for Basic Auth </div> </body> </html> [2] 使用Web浏览器从客户端计算机访问测试页。然后需要认证,如下所示作为设置,用在[1]中添加的用户回答 [3] 访问成功 12、基本Auth + PAM 限制特定网页上的访问,并使用OS用户通过SSL连接进行身份验证 [2] 例如,在[/var/www/html/auth-pam]目录下设置Basic Auth。 # install from EPEL [root@linuxprobe ~]# yum --enablerepo=epel -y install mod_authnz_external pwauth [root@linuxprobe ~]# vi /etc/httpd/conf.d/authnz_external.conf # add to the end <Directory /var/www/html/auth-pam> SSLRequireSSL AuthType Basic AuthName "PAM Authentication" AuthBasicProvider external AuthExternal pwauth require valid-user </Directory> [root@linuxprobe ~]# mkdir /var/www/html/auth-pam [root@linuxprobe ~]# vi /var/www/html/auth-pam/index.html # create a test page <html> <body> <div style="width: 100%; font-size: 40px; font-weight: bold; text-align: center;"> Test Page for PAM Auth </div> </body> </html> [root@linuxprobe ~]# systemctl restart httpd [3] 在客户端上使用Web浏览器访问测试页面https://linuxprobe.org/auth-pam/,并与操作系统上的用户进行身份验证。 13、使用WebDAV 下面是使用SSL连接配置WebDAV设置的示例 [1] 创建证书,请参照上文所述 [2] 例如,创建一个目录[webdav],它使得可以仅通过SSL连接到WebDAV目录。 [root@linuxprobe ~]# mkdir /home/webdav [root@linuxprobe ~]# chown apache. 
/home/webdav [root@linuxprobe ~]# chmod 770 /home/webdav [root@linuxprobe ~]# vi /etc/httpd/conf.d/webdav.conf # create new DavLockDB "/tmp/DavLock" Alias /webdav /home/webdav <Location /webdav> DAV On SSLRequireSSL Options None AuthType Basic AuthName WebDAV AuthUserFile /etc/httpd/conf/.htpasswd <RequireAny> Require method GET POST OPTIONS Require valid-user </RequireAny> </Location> # 添加用户:使用“-c”创建新文件(仅为初始注册添加“-c”选项) [root@linuxprobe ~]# htpasswd -c /etc/httpd/conf/.htpasswd wang New password: # set password Re-type new password: Adding password for user wang # **注意:用户wang的htpasswd已经创建过,不需要重复创建** [root@linuxprobe ~]# systemctl restart httpd [3] 如果启用了SELinux,请更改以下规则。 [root@linuxprobe ~]# chcon -R -t httpd_sys_rw_content_t /home/webdav [root@linuxprobe ~]# semanage fcontext -a -t httpd_sys_rw_content_t /home/webdav [4] 这是PC上的WebDAV客户端的设置(Windows 10)。 下载“CarotDAV”,这是一个免费的WebDAV客户端,从以下网站⇒ http://www.rei.to/carotdav_en.html ,下载后,安装并启动CarotDAV,然后显示以下屏幕,单击“文件”按钮并选择“WebDAV”。 [9] 到webdav目录下创建测试目录和文件 [root@linuxprobe tmp]# cd /home/webdav/ [root@linuxprobe webdav]# mkdir linuxprobe [root@linuxprobe webdav]# mkdir linuxcool [root@linuxprobe webdav]# touch vdevops.txt [root@linuxprobe webdav]# touch linuxcool.txt
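除了 CarotDAV 这类图形客户端,也可以直接在命令行用 curl 验证 WebDAV 是否可用(自签名证书需要加 -k,账号沿用上文创建的 wang,执行时会提示输入密码,仅为验证示例):

echo "webdav upload test" > /tmp/upload-test.txt
curl -k -u wang -X PROPFIND -H "Depth: 1" https://linuxprobe.org/webdav/   # 列出 webdav 目录内容
curl -k -u wang -T /tmp/upload-test.txt https://linuxprobe.org/webdav/     # 上传测试文件
curl -k -u wang -X DELETE https://linuxprobe.org/webdav/upload-test.txt    # 删除测试文件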
由于Nginx本身的一些优点,轻量,开源,易用,越来越多的公司使用nginx作为自己公司的web应用服务器,本文详细介绍nginx源码安装的同时并对nginx进行优化配置。 Nginx编译前的优化 [root@linuxprobe ~]# wget http://nginx.org/download/nginx-1.10.1.tar.gz [root@linuxprobe ~]# tar xvf nginx-1.10.1.tar.gz -C /usr/local/src/ [root@linuxprobe ~]# cd /usr/local/src/nginx-1.10.1/ 编译前的优化主要是用来修改程序名等等,例如: [root@linuxprobe nginx-1.10.1]# curl -I http://www.baidu.com Server: bfe/1.0.8.14 [root@linuxprobe nginx-1.10.1]# curl -I http://www.sina.com.cn Server: nginx [root@linuxprobe nginx-1.10.1]# curl -I http://www.linuxprobe.com HTTP/1.1 200 OK Server: nginx/1.10.1 #我们目标是将nginx更改名字 Content-Type: text/html; charset=UTF-8 Connection: keep-alive X-Powered-By: PHP/5.6.29 Set-Cookie: PHPSESSID=smm0i6u4f9v7bj0gove79ja1g7; path=/ Cache-Control: no-cache Date: Mon, 07 Seq 2016 06:09:11 GMT [root@linuxprobe nginx-1.10.1]# vim src/core/nginx.h 目的更改源码隐藏软件名称和版本号 #define NGINX_VERSION "nginx_stable" #此行修改的是你想要的版本号 #define NGINX_VER "linuxprobe/" NGINX_VERSION #此行修改的是你想修改的软件名称 [root@linuxprobe nginx-1.10.1]# vim +49 src/http/ngx_http_header_filter_module.c 修改HTTP头信息中的connection字段,防止回显具体版本号 拓展:通用http头域 通用头域包含请求和响应消息都支持的头域,通用头域包含Cache-Control、 Connection、Date、Pragma、Transfer-Encoding、Upgrade、Via。对通用头域的扩展要求通讯双方都支持此扩展,如果存在不支持的通用头域,一般将会作为实体头域处理。那么也就是说有部分设备,或者是软件,能获取到connection,部分不能,要隐藏就要彻底! static char ngx_http_server_string[] = "Server: LinuxprobeWeb" CRLF; [root@linuxprobe nginx-1.10.1]# vim +29 src/http/ngx_http_special_response.c 定义了http错误码的返回 有时候我们页面程序出现错误,Nginx会代我们返回相应的错误代码,回显的时候,会带上nginx和版本号,我们把他隐藏起来 static u_char ngx_http_error_full_tail[] = "<hr><center>" NGINX_VER "</center>" CRLF "</body>" CRLF "</html>" CRLF static u_char ngx_http_error_tail[] = "<hr><center>LinuxprobeWeb</center>" CRLF "</body>" CRLF "</html>" CRLF Nginx正式安装 一键安装相关依赖包 [root@linuxprobe nginx-1.10.1]# yum install gcc gcc-c++ autoconf automake zlib zlib-devel openssl openssl-devel -y 安装pcre依赖 #本地下载pcre上传到服务器 [root@linuxprobe]# tar zxvf /usr/local/src/pcre-8.36.tar.gz -C /usr/local/src/ [root@linuxprobe nginx-1.10.1]# cd /usr/local/src/pcre-8.36 [root@linuxprobe nginx-1.10.1]# ./configure && make && make install [root@linuxprobe nginx-1.10.1]# ./configure --prefix=/usr/local/nginx --with-http_dav_module --with-http_stub_status_module --with-http_addition_module --with-http_sub_module --with-http_flv_module --with-http_mp4_module --with-pcre=/usr/local/src/pcre-8.36 --with-openssl=/usr/include/openssl 注意:TCP_FASTOPEN 只在 3.7.1 以及更新的 Linux 内核版本才支持 --with-http_dav_module #启用支持(增加PUT,DELETE,MKCOL:创建集合,COPY和MOVE方法)默认关闭,需要编译开启 --with-http_stub_status_module #启用支持(获取Nginx上次启动以来的工作状态) --with-http_addition_module #启用支持(作为一个输出过滤器,支持不完全缓冲,分部分相应请求) --with-http_sub_module #启用支持(允许一些其他文本替换Nginx相应中的一些文本) --with-http_flv_module #启用支持(提供支持flv视频文件支持) --with-http_mp4_module #启用支持(提供支持mp4视频文件支持,提供伪流媒体服务端支持) --with-pcre=/usr/local/src/pcre-8.36 #需要注意,这里指的是源码,用#./configure --help |grep pcre查看帮助 [root@linuxprobe nginx-1.10.1]# make && make install #Nginx安装路径。如果没有指定,默认为 /usr/local/nginx。 --prefix=PATH #Nginx可执行文件安装路径。只能安装时指定,如果没有指定,默认为PATH/sbin/nginx。 --sbin-path=PATH #在没有给定-c选项下默认的nginx.conf的路径。如果没有指定,默认为PATH/conf/nginx.conf。 --conf-path=PATH #在nginx.conf中没有指定pid指令的情况下,默认的nginx.pid的路径。如果没有指定,默认为 PATH/logs/nginx.pid。 --pid-path=PATH #nginx.lock文件的路径。 --lock-path=PATH #在nginx.conf中没有指定error_log指令的情况下,默认的错误日志的路径。如果没有指定,默认为 PATH/logs/error.log。 --error-log-path=PATH #在nginx.conf中没有指定access_log指令的情况下,默认的访问日志的路径。如果没有指定,默认为 PATH/logs/access.log。 --http-log-path=PATH #在nginx.conf中没有指定user指令的情况下,默认的nginx使用的用户。如果没有指定,默认为 nobody。 --user=USER 
#在nginx.conf中没有指定user指令的情况下,默认的nginx使用的组。如果没有指定,默认为 nobody。 --group=GROUP #指定编译的目录 --builddir=DIR #启用 rtsig 模块 --with-rtsig_module #允许或不允许开启SELECT模式,如果configure没有找到合适的模式,比如,kqueue(sun os)、epoll(linux kenel 2.6+)、rtsig(实时信号) --with-select_module(--without-select_module) #允许或不允许开启POLL模式,如果没有合适的,则开启该模式。 --with-poll_module(--without-poll_module) #开启HTTP SSL模块,使NGINX可以支持HTTPS请求。这个模块需要已经安装了OPENSSL,在DEBIAN上是libssl-dev --with-http_ssl_module #启用ngx_http_ssl_module --with-http_realip_module #启用 ngx_http_realip_module --with-http_addition_module #启用 ngx_http_addition_module --with-http_sub_module #启用 ngx_http_sub_module --with-http_dav_module #启用 ngx_http_dav_module --with-http_flv_module #启用 ngx_http_flv_module --with-http_stub_status_module #启用 "server status" 页 --without-http_charset_module #禁用 ngx_http_charset_module --without-http_gzip_module #禁用 ngx_http_gzip_module. 如果启用,需要 zlib 。 --without-http_ssi_module #禁用 ngx_http_ssi_module --without-http_userid_module #禁用 ngx_http_userid_module --without-http_access_module #禁用 ngx_http_access_module --without-http_auth_basic_module #禁用 ngx_http_auth_basic_module --without-http_autoindex_module #禁用 ngx_http_autoindex_module --without-http_geo_module #禁用 ngx_http_geo_module --without-http_map_module #禁用 ngx_http_map_module --without-http_referer_module #禁用 ngx_http_referer_module --without-http_rewrite_module #禁用 ngx_http_rewrite_module. 如果启用需要 PCRE 。 --without-http_proxy_module #禁用 ngx_http_proxy_module --without-http_fastcgi_module #禁用 ngx_http_fastcgi_module --without-http_memcached_module #禁用 ngx_http_memcached_module --without-http_limit_zone_module #禁用 ngx_http_limit_zone_module --without-http_empty_gif_module #禁用 ngx_http_empty_gif_module --without-http_browser_module #禁用 ngx_http_browser_module --without-http_upstream_ip_hash_module #禁用 ngx_http_upstream_ip_hash_module --with-http_perl_module - #启用 ngx_http_perl_module --with-perl_modules_path=PATH #指定 perl 模块的路径 --with-perl=PATH #指定 perl 执行文件的路径 --http-log-path=PATH #Set path to the http access log --http-client-body-temp-path=PATH #Set path to the http client request body temporary files --http-proxy-temp-path=PATH #Set path to the http proxy temporary files --http-fastcgi-temp-path=PATH #Set path to the http fastcgi temporary files --without-http #禁用 HTTP server --with-mail #启用 IMAP4/POP3/SMTP 代理模块 --with-mail_ssl_module #启用 ngx_mail_ssl_module --with-cc=PATH #指定 C 编译器的路径 --with-cpp=PATH #指定 C 预处理器的路径 --with-cc-opt=OPTIONS # --with-ld-opt=OPTIONS #Additional parameters passed to the linker. With the use of the system library PCRE in FreeBSD, it is necessary to indicate --with-ld-opt="-L /usr/local/lib". --with-cpu-opt=CPU #为特定的CPU编译,有效的值包括:pentium, pentiumpro, pentium3, pentium4, athlon, opteron, amd64, sparc32, sparc64, ppc64 --without-pcre #禁止PCRE库的使用。同时也会禁止 HTTP rewrite 模块。在 "location" 配置指令中的正则表达式也需要 PCRE 。 --with-pcre=DIR #指定 PCRE 库的源代码的路径。 --with-pcre-opt=OPTIONS #设置PCRE的额外编译选项。 --with-md5=DIR #使用MD5汇编源码。 --with-md5-opt=OPTIONS #Set additional options for md5 building. --with-md5-asm #Use md5 assembler sources. --with-sha1=DIR #Set path to sha1 library sources. --with-sha1-opt=OPTIONS #Set additional options for sha1 building. --with-sha1-asm #Use sha1 assembler sources. --with-zlib=DIR #Set path to zlib library sources. --with-zlib-opt=OPTIONS #Set additional options for zlib building. 
--with-zlib-asm=CPU #Use zlib assembler sources optimized for specified CPU, valid values are: pentium, pentiumpro --with-openssl=DIR #Set path to OpenSSL library sources --with-openssl-opt=OPTIONS #Set additional options for OpenSSL building --with-debug #启用调试日志 --add-module=PATH #Add in a third-party module found in directory PATH 启动nginx [root@linuxprobe nginx-1.10.1]# /usr/local/nginx/sbin/nginx [root@linuxprobe nginx-1.10.1]# netstat -antup | grep nginx tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 52553/nginx 测试是否隐藏了版本和软件名 [root@linuxprobe nginx-1.10.1]# cd [root@linuxprobe ~]# curl -I http://127.0.0.1 错误代码测试(尽量使用firefox或者类360浏览器) Nginx运行用户 [root@linuxprobe~]# useradd -M -s /sbin/nologin nginx //修改nginx默认运行用户 [root@linuxprobe ~]# ps -aux | grep nginx //默认是nobody用户 nobody 52554 0.0 0.1 22660 1568 ? S 14:39 0:00 nginx: worker process [root@linuxprobe ~]# vim /usr/local/nginx/conf/nginx.conf user nginx; [root@linuxprobe ~]# /usr/local/nginx/sbin/nginx -s reload [root@linuxprobe ~]# ps -aux | grep nginx nginx 52555 0.0 0.1 22660 1568 ? S 14:39 0:00 nginx: worker process 在这里我们还可以看到在查看的时候,work进程是nginx用户了,但是master进程还是root 其中,master是监控进程,也叫主进程,work是工作进程,部分还有cache相关进程,关系如图: 所以我们可以master监控进程使用root,可以是降级使用普通用户,如果都是用普用户,注意编译安装的时候,是用普通用户执行,sudo方式操作!可以直接理解为master是管理员,work进程才是为用户提供服务的! Nginx运行进程个数,一般我们设置CPU的核心或者核心数x2,如果你不了解,top命令之后按1也可以看出来(一般直接追到线程即可) [root@linuxprobe ~]# vim /usr/local/nginx/conf/nginx.conf worker_processes 2; [root@linuxprobe ~]# /usr/local/nginx/sbin/nginx -s reload [root@linuxprobe ~]# ps -axu | grep nginx nginx 52686 0.0 0.1 22668 1300 ? S 15:10 0:00 nginx: worker process nginx 52687 0.0 0.1 22668 1376 ? S 15:10 0:00 nginx: worker process Nginx运行CPU亲和力(这个要根据你的CPU线程数配置) 比如4核4线程配置 [root@linuxprobe ~]# vim /usr/local/nginx/conf/nginx.conf worker_processes 4; worker_cpu_affinity 0001 0010 0100 1000; 比如8核8线程配置 worker_processes 8; worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000; 那么如果我是4线程的CPU,我只想跑两个进程呢? worker_processes 2; worker_cpu_affinity 0101 1010; 意思就似乎我开启了第一个和第三个内核,第二个和第四个内核,两个进程分别在这两个组合上轮询!worker_processes最多开启8个,8个以上性能提升不会再提升了,而且稳定性变得更低,所以8个进程够用了。 Nginx最多可以打开文件数 worker_rlimit_nofile 65535; 这个指令是指当一个nginx进程打开的最多文件描述符数目,理论值应该是最多打开文件数(ulimit -n)与nginx进程数相除,但是nginx分配请求并不是那么均匀,所以最好与ulimit -n的值保持一致。 Nginx事件处理模型 events { use epoll; worker_connections 1024; 知道在linux下nginx采用epoll事件模型,处理效率高,关于epoll的时间处理其他只是,可以自行百度,了解即可! Work_connections是单个进程允许客户端最大连接数,这个数值一般根据服务器性能和内存来制定,也就是单个进程最大连接数,实际最大值就是work进程数乘以这个数,如何设置,可以根据设置一个进程启动所占内存,top -u nginx,但是实际我们填入一个65535,足够了,这些都算并发值,一个网站的并发达到这么大的数量,也算一个大站了! 开启高效传输模式 http { include mime.types; default_type application/octet-stream; sendfile on; #tcp_nopush on; Include mime.types; 媒体类型 default_type application/octet-stream; 默认媒体类型足够 sendfile on; 开启高效文件传输模式,sendfile指令指定nginx是否调用sendfile函数来输出文件,对于普通应用设为 on,如果用来进行下载等应用磁盘IO重负载应用,可设置为off,以平衡磁盘与网络I/O处理速度,降低系统的负载。注意:如果图片显示不正常把这个改成off。 tcp_nopush on; 必须在sendfile开启模式才有效,防止网路阻塞,积极的减少网络报文段的数量 连接超时时间 主要目的是保护服务器资源,CPU,内存,控制连接数,因为建立连接也是需要消耗资源的,TCP的三次握手四次挥手等,我们一般断掉的是那些建立连接但是不做事儿,也就是我建立了链接开始,但是后续的握手过程没有进行,那么我们的链接处于等待状态的,全部断掉! 
同时我们也希望php建议短链接,消耗资源少 Java建议长链接,消耗资源少 keepalive_timeout 60; tcp_nodelay on; client_header_timeout 15; client_body_timeout 15; send_timeout 15; keepalived_timeout 客户端连接保持会话超时时间,超过这个时间,服务器断开这个链接 tcp_nodelay;也是防止网络阻塞,不过要包涵在keepalived参数才有效 client_header_timeout 客户端请求头读取超时时间,如果超过这个时间没有发送任何数据,nginx将返回request time out的错误 client_body_timeout 客户端求主体超时时间,超过这个时间没有发送任何数据,和上面一样的错误提示 send_timeout 响应客户端超时时间,这个超时时间仅限于两个活动之间的时间,如果超哥这个时间,客户端没有任何活动,nginx关闭连接 文件上传大小限制 我们知道PHP可以修改上传文件大小限制,nginx也可以修改 http { client_max_body_size 10m; Fastcgi调优 Nginx没有配置factcgi,你使用nginx是一个失败的方法,配置之前。了解几个概念: Cache: 写入缓存区 Buffer: 读取缓存区 Fastcgi 是静态服务和动态服务的一个接口 fastcgi_connect_timeout 300; #指定链接到后端FastCGI的超时时间。 fastcgi_send_timeout 300; #向FastCGI传送请求的超时时间,这个值是指已经完成两次握手后向FastCGI传送请求的超时时间。 fastcgi_read_timeout 300; #指定接收FastCGI应答的超时时间,这个值是指已经完成两次握手后接收FastCGI应答的超时时间。 fastcgi_buffer_size 64k; #指定读取FastCGI应答第一部分需要用多大的缓冲区,这个值表示将使用1个64KB的缓冲区读取应答的第一部分(应答头),可以设置为gastcgi_buffers选项指定的缓冲区大小。 fastcgi_buffers 4 64k; #指定本地需要用多少和多大的缓冲区来缓冲FastCGI的应答请求,如果一个php脚本所产生的页面大小为256KB,那么会分配4个64KB的缓冲区来缓存,如果页面大小大于256KB,那么大于256KB的部分会缓存到fastcgi_temp指定的路径中,但是这并不是好方法,因为内存中的数据处理速度要快于磁盘。一般这个值应该为站点中php脚本所产生的页面大小的中间值,如果站点大部分脚本所产生的页面大小为256KB,那么可以把这个值设置为“8 16K”、“4 64k”等。 fastcgi_busy_buffers_size 128k; #建议设置为fastcgi_buffer的两倍,繁忙时候的buffer fastcgi_temp_file_write_size 128k; #在写入fastcgi_temp_path时将用多大的数据库,默认值是fastcgi_buffers的两倍,设置上述数值设置小时若负载上来时可能报502Bad Gateway fastcgi_cache aniu_ngnix; #表示开启FastCGI缓存并为其指定一个名称。开启缓存非常有用,可以有效降低CPU的负载,并且防止502的错误放生,但是开启缓存也可能会引起其他问题,要很据具体情况选择 fastcgi_cache_valid 200 302 1h; #用来指定应答代码的缓存时间,实例中的值表示将2000和302应答缓存一小时,要和fastcgi_cache配合使用 fastcgi_cache_valid 301 1d; #将301应答缓存一天 fastcgi_cache_valid any 1m; #将其他应答缓存为1分钟 fastcgi_cache_min_uses 1; #请求的数量 fastcgi_cache_path #定义缓存的路径 修改nginx.conf配置文件,在http标签中添加如下: fastcgi_connect_timeout 300; fastcgi_send_timeout 300; fastcgi_read_timeout 300; fastcgi_buffer_size 64k; fastcgi_buffers 4 64k; fastcgi_busy_buffers_size 128k; fastcgi_temp_file_write_size 128k; #fastcgi_temp_path /data/ngx_fcgi_tmp; fastcgi_cache_path /opt/ngx_fcgi_cache levels=2:2 keys_zone=ngx_fcgi_cache:512m inactive=1d max_size=40g; 缓存路径,levels目录层次2级,定义了一个存储区域名字,缓存大小,不活动的数据在缓存中多长时间,目录总大小 在server location标签添加如下: location ~ .*\.(php|php5)?$ fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi.conf; fastcgi_cache ngx_fcgi_cache; fastcgi_cache_valid 200 302 1h; fastcgi_cache_valid 301 1d; fastcgi_cache_valid any 1m; fastcgi_cache_min_uses 1; fastcgi_cache_use_stale error timeout invalid_header http_500; fastcgi_cache_key http://$host$request_uri; fastcgi cache官方文档:http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_cache gzip调优 使用gzip压缩功能,可能为我们节约带宽,加快传输速度,有更好的体验,也为我们节约成本,所以说这是一个重点 Nginx启用压缩功能需要你来ngx_http_gzip_module模块,apache使用的是mod_deflate 一般我们需要压缩的内容有:文本,js,html,css,对于图片,视频,flash什么的不压缩,同时也要注意,我们使用gzip的功能是需要消耗CPU的! 
gzip on; #开启压缩功能 gzip_min_length 1k; #设置允许压缩的页面最小字节数,页面字节数从header头的Content-Length中获取,默认值是0,不管页面多大都进行压缩,建议设置成大于1K,如果小与1K可能会越压越大。 gzip_buffers 4 32k; #压缩缓冲区大小,表示申请4个单位为32K的内存作为压缩结果流缓存,默认值是申请与原始数据大小相同的内存空间来存储gzip压缩结果。 gzip_http_version 1.1; #压缩版本(默认1.1,前端为squid2.5时使用1.0)用于设置识别HTTP协议版本,默认是1.1,目前大部分浏览器已经支持GZIP解压,使用默认即可 gzip_comp_level 9; #压缩比例,用来指定GZIP压缩比,1压缩比最小,处理速度最快,9压缩比最大,传输速度快,但是处理慢,也比较消耗CPU资源。 gzip_types text/css text/xml application/javascript; #用来指定压缩的类型,‘text/html’类型总是会被压缩。 gzip_vary on; #vary header支持,改选项可以让前端的缓存服务器缓存经过GZIP压缩的页面,例如用Squid缓存经过nginx压缩的数据 那么配置压缩的过程中,会有一下参数 gzip on; gzip_min_length 1k; gzip_buffers 4 32k; gzip_http_version 1.1; gzip_comp_level 9; gzip_types text/plain application/javascript application/x-javascript text/javascript text/css application/xml application/xml+rss; gzip_vary on; gzip_proxied expired no-cache no-store private auth; gzip_disable "MSIE [1-6]\."; expires缓存调优 缓存,主要针对于图片,css,js等元素更改机会比较少的情况下使用,特别是图片,占用带宽大,我们完全可以设置图片在浏览器本地缓存365d,css,js,html可以缓存个10来天,这样用户第一次打开加载慢一点,第二次,就非常快乐!缓存的时候,我们需要将需要缓存的拓展名列出来! Expires缓存配置在server字段里面 location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ expires 3650d; location ~ .*\.(js|css)?$ expires 30d; 同时也可以对目录及其进行判断: location ~ ^/(images|javascript|js|css|flash|media|static)/ { expires 360d; location ~(robots.txt) { expires 7d; break; expire功能优点 (1)expires可以降低网站购买的带宽,节约成本 (2)同时提升用户访问体验 (3)减轻服务的压力,节约服务器成本,甚至可以节约人力成本,是web服务非常重要的功能。 expire功能缺点: 被缓存的页面或数据更新了,用户看到的可能还是旧的内容,反而影响用户体验。 解决办法: 第一个 缩短缓存时间,例如:1天,不彻底,除非更新频率大于1天 第二个 对缓存的对象改名 a.图片,附件一般不会被用户修改,如果用户修改了,实际上也是更改文件名重新传了而已 b.网站升级对于js,css元素,一般可以改名,把css,js,推送到CDN。 网站不希望被缓存的内容 1)广告图片 2)网站流量统计工具 3)更新频繁的文件(google的logo) [root@linuxprobe ~]# cd /usr/local/nginx/logs/ 日志优化的目的,是为了一天日志一压缩,焚天存放,超过10天的删除 创建日志切割脚本 //每天日志分割脚本 [root@linuxprobe logs]# vim cut_nginx_log.sh #!/bin/bash ###################################### #function:cut nginx log files #author: shaon ###################################### #set the path to nginx log files log_files_path="/usr/local/nginx/logs" log_files_dir=${log_files_path}/$(date -d "yesterday" +"%Y")/$(date -d "yesterday" +"%m") log_files_dir=${log_files_path}/$(date -d "yesterday" +"%Y")/$(date -d "yesterday" +"%m") #set nginx log files you want to cut log_files_name=(access error) #set the path to nginx. 
nginx_sbin="/usr/local/nginx/sbin/nginx" #Set how long you want to save save_days=30 ############################################ #Please do not modify the following script # ############################################ mkdir -p $log_files_dir log_files_num=${#log_files_name[@]} #cut nginx log files for((i=0;i<$log_files_num;i++));do mv ${log_files_path}/${log_files_name[i]}.log ${log_files_dir}/${log_files_name[i]}_$(date -d "yesterday" +"%Y%m%d").log done ays -exec rm -rf {} #delete 30 days ago nginx log files find $log_files_path -mtime +$save_days -exec rm -rf {} \; 健康检查的日志,不输入到log中,这些日志没有意义,我们分析的话只需要分析访问日志,看看一些页面链接,如200,301,404的状态吗,在SEO中很重要,而且我们统计PV是页面计算,这些都没有意义,反而消耗了磁盘IO,降低了服务器性能,我们可以屏蔽这些如图片,js,css这些不宜变化的内容 [root@linuxprobe ~]# vim /usr/local/nginx/conf/nginx.conf location ~ .*\.(js|jpg|jpeg|JPG|JPEG|css|bmp|gif|GIF)$ { access_log off; 日志目录权限优化 [root@linuxprobe ~]# chown -R root:root /usr/local/nginx/logs [root@linuxprobe ~]# chmod 700 /usr/local/nginx/logs 日志格式优化 #vim /usr/local/nginx/conf/nginx.conf log_format access ‘$remote_addr – $remote_user [$time_local] “$request” ‘‘$status $body_bytes_sent “$http_referer” ‘‘”$http_user_agent” $http_x_forwarded_for’; 其中,各个字段的含义如下: 1.$remote_addr 与$http_x_forwarded_for 用以记录客户端的ip地址; 2.$remote_user : 用来记录客户端用户名称; 3.$time_local : 用来记录访问时间与时区; 4.$request : 用来记录请求的url与http协议; 5.$status : 用来记录请求状态;成功是200, 6.$body_bytes_s ent :记录发送给客户端文件主体内容大小; 7.$http_referer : 用来记录从那个页面链接访问过来的; 8.$http_user_agent : 记录客户端浏览器的相关信息; 目录文件访问控制 主要用在禁止目录下指定文件被访问,当然也可以禁止所有文件被访问!一般什么情况下用?比如是有存储共享,这些文件本来都只是一下资源文件,那么这些资源文件就不允许被执行,如sh.py,pl,php等等 例如:禁止访问images下面的php程序文件 location ~ ^/images/.*\.(php|php5|.sh|.py|.py)$ { deny all; [root@linuxprobe ~]# /usr/local/nginx/sbin/nginx -s reload [root@linuxprobe ~]# mkdir /usr/local/nginx/html/images [root@linuxprobe ~]# echo "" > /usr/local/nginx/html/images/index.php 多目录组合配置方法 location ~ ^/images/(attachment|avatar)/.*\.(php|php5|.sh|.py|.py)$ { deny all; 配置nginx禁止访问*.txt文件 [root@linuxprobe ~]# echo "hello,linuxprobe" > /usr/local/nginx/html/test.txt 配置规则,禁止访问 [root@linuxprobe ~]# vim /usr/local/nginx/conf/nginx.conf //server字段中 location ~* \.(txt|doc)$ { if ( -f $request_filename) { root /usr/local/nginx/html; break; deny all; [root@linuxprobe ~]# /usr/local/nginx/sbin/nginx -s reload 当然,可以重定向到某一个URL [root@linuxprobe ~]# vim /usr/local/nginx/conf/nginx.conf location ~* \.(txt|doc)$ { if ( -f $request_filename) { root /usr/local/nginx/html; rewrite ^/(.*)$ http://www.linuxprobe.com last; break; 对目录进行限制的方法 [root@linuxprobe ~]# mkdir -p /usr/local/nginx/html/{linuxprobe,1mcloud} [root@linuxprobe ~]# echo linuxprobe > /usr/local/nginx/html/linuxprobe/index.html [root@linuxprobe ~]# echo 1mcloud > /usr/local/nginx/html/1mcloud/index.html [root@linuxprobe ~]# vim /usr/local/nginx/conf/nginx.conf location /linuxprobe/ { return 404 ; } location /1mcloud/ { return 403 ; } 测试返回结果 上面是直接给了反馈的状态吗,也可以通过匹配deny all方式做 [root@linuxprobe ~]# vim /usr/local/nginx/conf/nginx.conf location ~ ^/(linuxprobe)/ { deny all; [root@linuxprobe ~]# /usr/local/nginx/sbin/nginx -s reload 来源访问控制 这个需要ngx_http_access_module模块支持,不过,默认会安装 [root@linuxprobe ~]# vim /usr/local/nginx/conf/nginx.conf //写法类似Apache location ~ ^/(linuxprobe)/ { allow 192.168.1.0/24; deny all; 接着上面的实验,就可以访问了,下面是针对整个网站的写法,对/限制就OK location / { allow 192.168.1.0/24; deny all; 当然可以写IP,可以写IP段,但是注意次序,上下匹配 同时,也可以通过if语句控制,给以友好的错误提示 if ( $remote_addr = 10.1.1.55 ) { return 404; #此处remote_addr地址为当前编辑文档的系统ip地址 IP和301优化 有时候,我们发现访问网站的时候,使用IP也是可以得,我们可以把这一层给屏蔽掉,让其直接反馈给403,也可以做跳转 跳转的做法: server { listen 80 
default_server; server_name _; rewrite ^ http://www.linuxprobe.com$request_uri?; 403反馈的做法 server { listen 80 default_server; server_name _; return 403; 301跳转的做法 #如我们域名一般在解析的过程中,linuxprobe.com一般会跳转到www.linuxprobe.com,记住修改本地hosts server { listen 80; root /usr/share/nginx/html/; server_name www.linuxprobe.com linuxprobe.com; if ($host = 'a.com' ) { rewrite ^/(.*)$ www.linuxprobe.com/$1 permanent; 防止别人直接从你网站引用图片等链接,消耗了你的资源和网络流量,那么我们的解决办法由几种: 1:水印,品牌宣传,你的带宽,服务器足够 2:防火墙,直接控制,前提是你知道IP来源 3:防盗链策略 下面的方法是直接给予404的错误提示 location ~* \.(jpg|gif|png|swf|flv|wma|wmv|asf|mp3|mmf|zip|rar)$ { valid_referers none blocked *.linuxprobe,com linuxprobe.com; if ($invalid_referer) { return 404; 同时,我们也可以设置一个独有的,图片比较小的,来做rewrite跳转 location ~* \.(jpg|gif|png|swf|flv|wma|wmv|asf|mp3|mmf|zip|rar)$ { valid_referers none blocked *.a.com a.com; if ($invalid_referer) { rewrite ^/ http://www.linuxprobe.com/img/nolink.png; 错误页面的提示 对于自定义的错误页面,我们只需要将errorpage写入到配置文件 error_page 404 /404.html; 内部身份验证 [root@linuxprobe ~]# vim /usr/local/nginx/conf/nginx.conf location /linuxprobe/ { auth_basic "haha"; auth_basic_user_file /usr/local/nginx/conf/passwd; [root@linuxprobe ~]# yum -y install httpd-tools [root@linuxprobe ~]# htpasswd -cb /usr/local/nginx/conf/passwd linuxprobe 211212 [root@linuxprobe ~]# chmod 400 /usr/local/nginx/conf/passwd [root@linuxprobe ~]# chown nginx /usr/local/nginx/conf/passwd [root@linuxprobe ~]# /usr/local/nginx/sbin/nginx -s reload 防止DDOS攻击 通过使用limit_conn_zone进行控制单个IP或者域名的访问次数 [root@linuxprobe ~]# vim /usr/local/nginx/conf/nginx.conf http字段中配置 limit_conn_zone $binary_remote_addr zone=addr:10m; server的location字段配置 location / { root html; limit_conn addr 1; #在其他机器上面进行并发测试 [root@linuxprobe ~]# webbench -c 5000 -t 120 http://10.1.1.83/linuxprobe/index.html #webbench安装请参考http://blog.sina.com.cn/s/blog_87113ac20102wag5.html #nginx匹配符介绍 = 开头表示精确匹配 ^~ 开头表示uri以某个常规字符串开头,理解为匹配 url路径即可。nginx不对url做编码,因此请求为/static/20%/aa,可以被规则^~ /static/ /aa匹配到(注意是空格)。 ~ 开头表示区分大小写的正则匹配 ~* 开头表示不区分大小写的正则匹配 !~和!~* 分别为区分大小写不匹配及不区分大小写不匹配的正则 / 通用匹配,任何请求都会匹配到。 多个location配置的情况下匹配顺序为(参考资料而来,还未实际验证,试试就知道了,不必拘泥,仅供参考): 首先匹配 =,其次匹配^~, 其次是按文件中顺序的正则匹配,最后是交给 / 通用匹配。当有匹配成功时候,停止匹配,按当前匹配规则处理请求。
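下面给出一个最小的配置示意,用来实际验证上述 location 匹配顺序(其中的 8081 端口、/static/ 路径和返回内容都只是假设的例子,并非前文配置的一部分):

# 示意配置:验证 location 匹配优先级
server {
    listen 8081;
    server_name localhost;

    location = / {                 # 精确匹配,优先级最高
        return 200 "exact /\n";
    }
    location ^~ /static/ {         # 前缀匹配,命中后不再检查后面的正则
        return 200 "prefix /static/\n";
    }
    location ~* \.(gif|jpg|png)$ { # 不区分大小写的正则匹配
        return 200 "regex image\n";
    }
    location / {                   # 通用匹配,兜底
        return 200 "default\n";
    }
}

配置好后用 curl 依次访问 /、/static/a.png、/foo.PNG、/bar,观察各自命中的 location,即可印证"先 =,再 ^~,再按文件顺序匹配正则,最后交给 / 通用匹配"的顺序。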
Tomcat在使用的过程中会遇到很多报错,有些是程序的报错,但还有一部分是tomcat本身的报错,我们可以通过优化tomcat的初始配置来提高tomcat的性能。Tomcat的优化主要体现在两方面:内存、并发连接数。 1、内存优化: 优化内存,主要是在bin/catalina.bat/sh 配置文件中进行。linux上,在catalina.sh中添加: JAVA_OPTS="-server -Xms1G -Xmx2G -Xss256K -Djava.awt.headless=true -Dfile.encoding=utf-8 -XX:MaxPermSize=256m -XX:PermSize=128M -XX:MaxPermSize=256M" • -server:启用jdk的server版本。 • -Xms:虚拟机初始化时的最小堆内存。 • -Xmx:虚拟机可使用的最大堆内存。 #-Xms与-Xmx设成一样的值,避免JVM因为频繁的GC导致性能大起大落 • -XX:PermSize:设置非堆内存初始值,默认是物理内存的1/64。 • -XX:MaxNewSize:新生代占整个堆内存的最大值。 • -XX:MaxPermSize:Perm(俗称方法区)占整个堆内存的最大值,也称内存最大永久保留区域。 1)错误提示:java.lang.OutOfMemoryError:Java heap space Tomcat默认可以使用的内存为128MB,在较大型的应用项目中,这点内存是不够的,有可能导致系统无法运行。常见的问题是报Tomcat内存溢出错误,Outof Memory(系统内存不足)的异常,从而导致客户端显示500错误,一般调整Tomcat的-Xms和-Xmx即可解决问题,通常将-Xms和-Xmx设置成一样,堆的最大值设置为物理可用内存的最大值的80%。 set JAVA_OPTS=-Xms512m-Xmx512m 2)错误提示:java.lang.OutOfMemoryError: PermGenspace PermGenspace的全称是Permanent Generationspace,是指内存的永久保存区域,这块内存主要是被JVM存放Class和Meta信息的,Class在被Loader时就会被放到PermGenspace中,它和存放类实例(Instance)的Heap区域不同,GC(Garbage Collection)不会在主程序运行期对PermGenspace进行清理,所以如果你的应用中有很CLASS的话,就很可能出现PermGen space错误,这种错误常见在web服务器对JSP进行precompile的时候。如果你的WEB APP下都用了大量的第三方jar, 其大小超过了jvm默认的大小(4M)那么就会产生此错误信息了。解决方法: setJAVA_OPTS=-XX:PermSize=128M 3)在使用-Xms和-Xmx调整tomcat的堆大小时,还需要考虑垃圾回收机制。如果系统花费很多的时间收集垃圾,请减小堆大小。一次完全的垃圾收集应该不超过3-5 秒。如果垃圾收集成为瓶颈,那么需要指定代的大小,检查垃圾收集的详细输出,研究垃圾收集参数对性能的影响。一般说来,你应该使用物理内存的 80% 作为堆大小。当增加处理器时,记得增加内存,因为分配可以并行进行,而垃圾收集不是并行的。 2、连接数优化: #优化连接数,主要是在conf/server.xml配置文件中进行修改。 2.1、优化线程数 找到Connectorport="8080" protocol="HTTP/1.1",增加maxThreads和acceptCount属性(使acceptCount大于等于maxThreads),如下: <Connectorport="8080" protocol="HTTP/1.1"connectionTimeout="20000" redirectPort="8443"acceptCount="500" maxThreads="400" /> • maxThreads:tomcat可用于请求处理的最大线程数,默认是200 • minSpareThreads:tomcat初始线程数,即最小空闲线程数 • maxSpareThreads:tomcat最大空闲线程数,超过的会被关闭 • acceptCount:当所有可以使用的处理请求的线程数都被使用时,可以放到处理队列中的请求数,超过这个数的请求将不予处理.默认100 2.2、使用线程池 在server.xml中增加executor节点,然后配置connector的executor属性,如下: <Executorname="tomcatThreadPool" namePrefix="req-exec-"maxThreads="1000" minSpareThreads="50"maxIdleTime="60000"/> <Connectorport="8080" protocol="HTTP/1.1"executor="tomcatThreadPool"/> • namePrefix:线程池中线程的命名前缀 • maxThreads:线程池的最大线程数 • minSpareThreads:线程池的最小空闲线程数 • maxIdleTime:超过最小空闲线程数时,多的线程会等待这个时间长度,然后关闭 • threadPriority:线程优先级 注:当tomcat并发用户量大的时候,单个jvm进程确实可能打开过多的文件句柄,这时会报java.net.SocketException:Too many open files错误。可使用下面步骤检查: • ps -ef |grep tomcat 查看tomcat的进程ID,记录ID号,假设进程ID为10001 • lsof -p 10001|wc -l 查看当前进程id为10001的 文件操作数 • 使用命令:ulimit -a 查看每个用户允许打开的最大文件数 3、Tomcat Connector三种运行模式(BIO, NIO, APR) 3.1、三种模式比较: 1)BIO:一个线程处理一个请求。缺点:并发量高时,线程数较多,浪费资源。Tomcat7或以下在Linux系统中默认使用这种方式。 2)NIO:利用Java的异步IO处理,可以通过少量的线程处理大量的请求。Tomcat8在Linux系统中默认使用这种方式。Tomcat7必须修改Connector配置来启动(conf/server.xml配置文件): <Connectorport="8080"protocol="org.apache.coyote.http11.Http11NioProtocol" connectionTimeout="20000"redirectPort="8443"/> 3)APR(Apache Portable Runtime):从操作系统层面解决io阻塞问题。Linux如果安装了apr和native,Tomcat直接启动就支持apr。 3.2、apr模式 安装apr以及tomcat-native yum -y install apr apr-devel 进入tomcat/bin目录,比如: cd /opt/local/tomcat/bin/ tar xzfv tomcat-native.tar.gz cd tomcat-native-1.1.32-src/jni/native ./configure --with-apr=/usr/bin/apr-1-config make && make install #注意最新版本的tomcat自带tomcat-native.war.gz,不过其版本相对于yum安装的apr过高,configure的时候会报错。 解决:yum remove apr apr-devel –y,卸载yum安装的apr和apr-devel,下载最新版本的apr源码包,编译安装;或者下载低版本的tomcat-native编译安装 安装成功后还需要对tomcat设置环境变量,方法是在catalina.sh文件中增加1行: CATALINA_OPTS="-Djava.library.path=/usr/local/apr/lib" #apr下载地址:http://apr.apache.org/download.cgi 
#tomcat-native下载地址:http://tomcat.apache.org/download-native.cgi 
修改 conf/server.xml 中 8080 端口对应的 Connector,把 protocol 改为 org.apache.coyote.http11.Http11AprProtocol,例如: 
<Connector executor="tomcatThreadPool" port="8080" protocol="org.apache.coyote.http11.Http11AprProtocol" connectionTimeout="20000" enableLookups="false" redirectPort="8443" URIEncoding="UTF-8" /> 
PS:启动以后查看日志,出现如下内容表示已开启 apr 模式: 
Sep 19, 2016 3:46:21 PM org.apache.coyote.AbstractProtocol start INFO: Starting ProtocolHandler ["http-apr-8081"]
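另外,前面提到并发量大时可能报 java.net.SocketException: Too many open files,需要依次用 ps、lsof、ulimit 排查,下面是把这几步合在一起的示意脚本(假设 Tomcat 进程的命令行中包含 catalina 关键字,脚本本身并非 Tomcat 自带):

#!/bin/bash
# 示意脚本:检查 Tomcat 进程已打开的文件句柄数,并和系统上限做对比
PID=$(ps -ef | grep '[c]atalina' | awk '{print $2}' | head -n 1)
if [ -z "$PID" ]; then
    echo "未找到 Tomcat 进程"
    exit 1
fi
OPEN=$(lsof -p "$PID" | wc -l)   # 当前进程打开的文件操作数
LIMIT=$(ulimit -n)               # 当前用户允许打开的最大文件数
echo "Tomcat PID: $PID, 已打开句柄: $OPEN, 上限: $LIMIT"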
Grafana 是 Graphite 和 InfluxDB 仪表盘和图形编辑器。Grafana 是开源的,功能齐全的度量仪表盘和图形编辑器,支持 Graphite,InfluxDB 和 OpenTSDB。Grafana 主要特性:灵活丰富的图形化选项;可以混合多种风格;支持白天和夜间模式;多个数据源;Graphite 和 InfluxDB 查询编辑器等等。 Grafana安装 Linux上(CentOS,Fedora,OpenSuse,Redhat)安装Grafana源码包 1、可以使用yum直接安装Grafana yum install https://grafanarel.s3.amazonaws.com/builds/grafana-3.1.0-1468321182.x86_64.rpm 2、安装最新稳定版 #在CentOS、Redhat/Fedora:手动安装 yum install initscripts fontconfig rpm -Uvh grafana-3.1.0-1468321182.x86_64.rpm #在OpenSuse上安装: rpm -i --nodeps grafana-3.1.0-1468321182.x86_64.rpm 3、安装via yum仓库,配置grafana源 # cat /etc/yum.repos.d/grafana.repo [grafana] name=grafana baseurl=https://packagecloud.io/grafana/stable/el/6/$basearch repo_gpgcheck=1 enabled=1 gpgcheck=1 gpgkey=https://packagecloud.io/gpg.key https://grafanarel.s3.amazonaws.com/RPM-GPG-KEY-grafana sslverify=1 sslcacert=/etc/pki/tls/certs/ca-bundle.crt #如果你想体验测试版本可以更换测试链接 baseurl=https://packagecloud.io/grafana/testing/el/6/$basearch #使用yum安装grafana yum install –y grafana #RPM GPG Key #这些RPMs是签名,可以用公共GPG密钥验证签名, #公共密钥下载:https://grafanarel.s3.amazonaws.com/RPM-GPG-KEY-grafana 4、安装包详细信息 ► 二进制文件 /usr/sbin/grafana-server ► 服务管理脚本 /etc/init.d/grafana-server ► 安装默认文件 /etc/sysconfig/grafana-server ► 配置文件 /etc/grafana/grafana.ini ► 安装systemd服务(如果systemd可用 grafana-server.service ► 日志文件 /var/log/grafana/grafana.log ► 缺省配置指定一个数据库sqlite3 /var/lib/grafana/grafana.db 5、启动Grafana service grafana-server start #设置garfana-server开机自启 chkconfig grafana-server on #启动服务器(通过systemd) systemctl daemon-reload systemctl start grafana-server systemctl status grafana-server #设置开机自启systemd服务 systemctl enable grafana-server.service 6、环境变量文件 Systemd服务和daemon服务在后台运行时,都使用文件/etc/sysconfig/grafana-server来设置环境变量,可以通过修改garfana-server文件来设置日志目录等其他变量。 #默认日志文件:/var/log/grafana #数据库设置 #缺省配置指定一sqlite3数据库位于/var/lib/grafana/grafana.db。请在升级前备份这个数据库。还可以使用MySQL或Postgres Grafana数据库。 7、访问测试 #地址栏输入:http://10.1.1.103:3000/login #默认用户和密码:admin admin 安装garfana-zabbix插件 官方网站:https://github.com/alexanderzobnin/grafana-zabbix 官网wiki:http://docs.grafana-zabbix.org/installation/ 使用grafana-cli工具安装 #获取可用插件列表 grafana-cli plugins list-remote #安装zabbix插件 grafana-cli plugins install alexanderzobnin-zabbix-app #安装插件完成之后重启garfana服务 service grafana-server restart #使用grafana-zabbix-app源,其中包含最新版本的插件 cd /var/lib/grafana/plugins/ #克隆grafana-zabbix-app插件项目 git clone https://github.com/alexanderzobnin/grafana-zabbix-app #注:如果没有git,请先安装git yum –y install git # 插件安装完成重启garfana服务 service grafana-server restart #注:通过这种方式,可以很容器升级插件 cd /var/lib/grafana/plugins/grafana-zabbix-app git pull service grafana-server restart 使用源码包安装 #源码安装需要NodeJS,npm和Grunt支持 git clone https://github.com/alexanderzobnin/grafana-zabbix.git cd grafana-zabbix npm install npm install -g grunt-cli grunt #插件将建成dist/目录。然后你可以将它复制到你的grafana插件目录或在grafana配置文件中指定编译插件的路径 [plugin.zabbix] path = /home/your/clone/dir/grafana-zabbix/dist #如果需要更新,执行下面命令 git pull grunt #重启grafana服务 service grafana-server restart systemctl restart grafana-server 配置Grafana启用插件 #登录到grafana上,移动到grafana左侧面板的插件,选择应用程序选项卡,然后选择“配置”选项卡,打开Zabbix,启用插件。 #配置Zabbix数据源 #添加新数据源,打开侧面板Zabbix数据源,单击“添加数据源并选择从下拉列表Zabbix。 #注意红线标注的地方,Name自定义,Type选择Zabbix,Url填写访问zabbix-web的url,加上zabbix-api的php文件,Zabbix details用户名密码需要在Zabbix-web页面中设置,本文中用户名:gafana,密码:grafana,不想新建的话,可以使用Zabbix的初始用户.设置完成点击增加按钮,弹出下图: #本教程的Zabbix版本为Zabbix-3.0.3,详细配置教程请参考官方文档: http://docs.grafana-zabbix.org/installation/configuration/ #常见错误解决请参考:http://docs.grafana.org/installation/troubleshooting/ 开始使用Grafana-Zabbix 添加新的图形面板到仪表板 创建CPU负载图形 一张图表中添加多个监控项 
#可以使用度量字段中的正则表达式生成大量的项目的图表。grafana使用JavaScript正则表达式来实现。例如,如果需要显示的CPU时间(用户、系统、iowait,等等)你可以使用正则表达式在项字段创建图: /CPU (?!idle).* time/ #使用正则表达式对不同主机的相同监控项进行比较,使用/.*/表示匹配全部,/^salt/匹配以salt开头的选项,以所有主机显示CPU system time为例: #创建一个图像显示MySQL查询数据的统计,选择组,主机,应用,使用/MySQL .* operations/匹配不同的操作 通过设置Max data points的值(设为50),来调整图形的显示效果,下图标红圈注的地方需要修改。 使用Singlestat和Gauges绘图 查看全部的图形效果图 保存创建的仪表板 grafana插件安装 #插件链接:https://github.com/grafana/grafana #安装Panel #使用grafana-cli工具在命令行下面安装Clock grafana-cli plugins install grafana-clock-panel #安装apps,Worldping grafana-cli plugins install raintank-worldping-app #安装Data source,以SimpleJson为例 grafana-cli plugins install grafana-simple-json-datasource #安装完成,提示重启grafana服务 /etc/init.d/grafana-server restart #插件使用及仪表板模板导入 #Worldping使用展示 #到此grafana-zabbix安装及使用完成。 官方地址:http://docs.grafana-zabbix.org 项目Demo:http://play.grafana.org/ 项目github:https://github.com/grafana/grafana
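插件装好、服务重启之后,可以先做两个简单检查确认环境正常(下面的 IP、端口沿用前文示例,属于假设值):

grafana-cli plugins ls
# 列出已安装插件,确认 alexanderzobnin-zabbix-app 出现在列表中
curl -s -o /dev/null -w "%{http_code}\n" http://10.1.1.103:3000/login
# 返回 200 说明 Grafana Web 界面已可访问,再用默认的 admin/admin 登录启用插件即可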
Linux系统中,我们经常会用man命令来帮助查看这个命令的具体用法,man是很强大的,但是英语不好的同学用man用起来可能不那么顺手,自然而然的就出现了cheat命令,cheat命令就是通过简单的实例告诉你一个命令的具体使用方法,它被创建的目的是帮助系统管理员记住常用的系统命令。 1、 Cheat介绍 cheat通过实例告诉使用者一些命令的具体使用方法。 2、 Cheat例子 例如当时想要知道tar命令具体是如何使用的,你可以使用下面命令查看: cheat tar #你会看到像下面一样的效果图 #查看哪些命令可以用cheat, cheat -l | less #可以看到常用的命令都可以使用cheat来查看具体使用例子 3、 cheat安装 #cheat命令需要python环境的支持,需要安装python和pip yum install python-pip –y pip install --upgrade pip pip install cheat #或者通过github安装 pip install docopt pygments appdirs git clone git@github.com:chrisallenlane/cheat.git cd cheat python setup.py install 4、 修改cheat备忘单 cheat还有一个好处就是你可以定义自己常用的备忘单,默认的只是一些最基础的例子。自定义的备忘录放到~/.cheat/目录下,当设置好编辑环境可以使用下面的命令进行编辑 cheat -e foo 如果新建的foo已经存在,会直接打开编写,不存在会创建然后编辑 5、 设置cheat使用的环境变量 root@saltstack-master[02:20:15]:~$cheat -v cheat 2.1.25 #设置一个cheat的保存路径 默认情况下,个人的cheat保存在其家目录下面的.cheat目录下,但是可以定义一个特定的目录环境,使其生效 export DEFAULT_CHEAT_DIR='/opt/cheats' #可以指定多个目录使其生效 export CHEATPATH="$CHEATPATH:/path/to/more/cheats" #使用命令cheat -d 查看定义好的cheat路径 root@saltstack-master[02:27:27]:~$cheat -d /opt/cheats #默认cheat保存路径已改变 /usr/lib/python2.6/site-packages/cheat/cheatsheets #cheat默认的常用命令保存路径 6、 开启语法高亮 #如果需要在自己备忘录开启语法高亮的话,可以用下面命令启用 export CHEATCOLORS=true 7、 查看实例 1、 dd命令 2、 du命令 3、 git命令 4、 svn命令 #更多实例查看就不一一演示。 8、 自定义cheat vim /opt/cheats/iostat cheat iostat #修改cheat默认的备忘录,补全自己常用的命令
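结合上面第 4、5、8 节的内容,下面给出一个手工创建自定义备忘单的完整示意(/opt/cheats 沿用前文设置的 DEFAULT_CHEAT_DIR,iostat 的备忘内容是假设的例子):

# 示意:创建并查看一份自定义 iostat 备忘单
mkdir -p /opt/cheats
cat > /opt/cheats/iostat <<'EOF'
# 每 2 秒输出一次扩展统计,共 5 次
iostat -x 2 5
# 以 MB 为单位查看磁盘读写
iostat -m 2
EOF
cheat iostat    # 查看刚创建的备忘单内容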
源码搭建LNMP架构部署动态网站环境Nginx 简介Nginx是一款相当优秀的用于部署动态网站的服务程序,Nginx具有不错的稳定性、丰富的功能以及占用较少的系统资源等独特特性。Nginx ("engine x") 是一个高性能的 HTTP 和 反向代理 服务器。Nginx 是由 Igor Sysoev 为俄罗斯访问量第二的 Rambler.ru 站点开发的,第一个公开版本0.1.0发布于2004年10月4日。其将源代码以类BSD许可证的形式发布,因它的稳定性、丰富的功能集、示例配置文件和低系统资源的消耗而闻名。2011年6月1日,nginx 1.0.4发布。Nginx是一款轻量级的Web 服务器/反向代理服务器及电子邮件(IMAP/POP3)代理服务器,并在一个BSD-like 协议下发行。由俄罗斯的程序设计师Igor Sysoev所开发,供俄国大型的入口网站及搜索引擎Rambler(俄文:Рамблер)使用。其特点是占有内存少,并发能力强,事实上nginx的并发能力确实在同类型的网页服务器中表现较好,中国大陆使用nginx网站用户有:百度BWS、新浪、网易、腾讯等。通过部署Linux+Nginx+MYSQL+PHP这四种开源软件,便拥有了一个免费、高效、扩展性强、资源消耗低的LNMP动态网站架构了源码安装程序很多软件产品只会以源码包的方式发布,如果只会用RPM命令就只能去互联网大海洋中慢慢寻找到由第三方组织或黑客们编写的RPM软件包后才能安装程序了,并且源码程序的可移植性非常好,可以针对不同的系统架构而正确运行,但RPM软件包则必需严格符合限制使用的平台和架构后才能顺利安装,所以建议即便在工作中可以很舒服的用Yum仓库来安装服务程序,源码安装的流程也一定要记清:第一步 解压文件 源码包通常会使用tar工具归档然后用gunzip或bzip2进行压缩,后缀格式会分别为.tar.gz与tar.bz2 ,解压方法: [root@vdevops package]# tar zxvf filename.tar.gz [root@vdevops package]# tar jvvf filename.tar.bz2 tar zxvf filename.tar.gz -C FileDirectory 第2步,切换到解压后的目录: [root@vdevops ~]# cd FileDirectory第3步:准备编译工作: 在开始安装服务程序之前,先阅读readme文件,然后需要执行configure脚本,他会自动的对当前系统进行一系列的评估,如源文件、软件依赖性库、编译器、汇编器、连接器检查等等,如果有需求,还可以使用--prefix参数来指定程序的安装路径(很实用),而当脚本检查系统环境符合要求后,则会在当前目录下生成一个Makefile文件。 [root@vdevops ~]# ./configure --prefix=/usr/local/program configure之前可以先configure –help查看都有哪些参数,可以自定义编译之后的文件目录,通过--prefix=/opt/servername 第4步:生成安装程序: 刚刚生成的Makefile文件会保存有系统环境依赖关系和安装规则,接下来需要使用make命令来根据MakeFile文件提供的规则使用合适的SHELL来编译所有依赖的源码,然后make命令会生成一个最终可执行的安装程序。 [root@vdevops ~]# make第5步:安装服务程序: 如果在configure脚本阶段中没有使用--prefix参数,那么程序一般会被默认安装到/usr/local/bin目录中。 [root@vdevops ~]# make install第6步:清理临时文件(可选): [root@vdevops ~]# make clean卸载服务程序的命令(请不要随便执行!!!): [root@vdevops ~]# make uninstall 部署LNMP架构LNMP(即Linux+Nginx+MYSQL+PHP)是目前非常热门的动态网站部署架构,一般是指: Linux:如RHEL、Centos、Debian、Fedora、Ubuntu等系统。Nginx:高性能、低消耗的HTTP与反向代理服务程序。MYSQL:热门常用的数据库管理软件。PHP:一种能够在服务器端执行的嵌入HTML文档的脚本语言。Tengine:Tengine是由淘宝网发起的Web服务器项目。它在Nginx的基础上,针对大访问量网站的需求,添加了很多高级功能和特性。Tengine的性能和稳定性已经在大型的网站如淘宝网,天猫商城等得到了很好的检验。(可以这样理解:淘宝拿到了Nginx源代码之后,进行了功能的填充,优化等等,然后提交给Nginx官方,但是由于Nginx官方相应慢或者不响应,加上语言沟通的不顺畅,于是淘宝公司就自己打包,在遵循GPL的原则上进行二次开发,于是就出了现在的Tengine这个版本)。官网网站:http://tengine.taobao.org/ Nginx工作原理:对比apache的工作原理,对php文件处理过程的区别1:nginx是通过php-fpm这个服务来处理php文件2:apache是通过libphp5.so这个模块来处理php文件Nginx: Apache Apache的libphp5.so随着apache服务器一起运行,而Nginx和php-fpm是各自独立运行,所以在运行过程中,Nginx和php-fpm都需要分别启动!修改Nginx配置文件,启动nginx服务,修改php配置文件,启动php-fpm服务nginx相对于apache的优点: 轻量级,同样起web 服务,比apache 占用更少的内存及资源 ;高并发,nginx 处理请求是异步非阻塞的,而apache 则是阻塞型的,在高并发下nginx 能保持低资源低消耗高性能;高度模块化的设计,编写模块相对简单;社区活跃,各种高性能模块出品迅速。apache 相对于nginx 的优点: rewrite ,比nginx 的rewrite强大;模块超多,基本想到的都可以找到;少bug ,nginx 的bug 相对较多;超稳定 存在就是理由,一般来说,需要性能的web 服务,用nginx 。如果不需要性能只求稳定,那就apache 。nginx处理动态请求是鸡肋,一般动态请求要apache去做,nginx只适合静态和反向。 部署LNMP架构需要安装依赖包yum -y install make gcc gcc-c++ flex bison file libtool libtool-libs autoconf kernel-devel libjpeg libjpeg-devel libpng libpng-devel gd freetype freetype-devel libxml2 libxml2-devel zlib zlib-devel glib2 glib2-devel bzip2 bzip2-devel libevent ncurses ncurses-devel curl curl-devel e2fsprogs e2fsprogs-devel krb5-devel libidn libidn-devel openssl openssl-devel gettext gettext-devel ncurses-devel gmp-devel unzip libcap lsof 系统初始配置:yum update –y && yum -y install vim wget unzip lrzsz 关闭防火墙: [root@vdevops nginx-1.9.15]# service iptables stopiptables: Setting chains to policy ACCEPT: filter [ OK ]iptables: Flushing firewall rules: [ OK ]iptables: Unloading modules: [ OK ][root@vdevops nginx-1.9.15]# chkconfig iptables off禁用selinux[root@vdevops ~]# getenforce #查看selinux状态Enforcing[root@vdevops nginx-1.9.15]# sed -i 
's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux #永久关闭临时关闭(不用重启机器): setenforce 0安装nginxhttp://nginx.org/wget http://nginx.org/download/nginx-1.9.15.tar.gz -P /usr/local/src/ Mainline version 主线版本 Stable version 稳定版本 Legacy versions 老版本,遗产版本 所需依赖包:[root@vdevops ~]# yum -y install gcc gcc-c++ autoconf automake zlib zlib-devel openssl openssl-devel pcre-devel zlib:Nginx提供gzip模块,需要zlib的支持 openssl:Nginx提供SSL的功能 wget http://nchc.dl.sourceforge.net/project/pcre/pcre/8.38/pcre-8.38.zip -P /usr/local/src/ nginx rewrite依赖于PCRE库 [root@vdevops src]# unzip pcre-8.38.zip -d /usr/local/ 创建Nginx运行用户:[root@vdevops ~]# groupadd nginx[root@vdevops ~]# useradd nginx -g nginx -M -s /sbin/nologincd /usr/local/src[root@vdevops src]# tar xvf nginx-1.9.15.tar.gzcd nginx-1.9.15[root@vdevops nginx-1.9.15]# ./configure --prefix=/opt/nginx --with-http_dav_module --with-http_stub_status_module --with-http_addition_module --with-http_sub_module --with-http_flv_module --with-http_mp4_module --with-pcre=/usr/local/pcre-8.38 --user=nginx --group=nginx 注:--with-http_dav_module #启用支持(增加PUT,DELETE,MKCOL:创建集合,COPY和MOVE方法) 默认关闭,需要编译开启 --with-http_stub_status_module #启用支持(获取Nginx上次启动以来的工作状态)--with-http_addition_module #启用支持(作为一个输出过滤器,支持不完全缓冲,分部分相应请求)--with-http_sub_module #启用支持(允许一些其他文本替换Nginx相应中的一些文本)--with-http_flv_module #启用支持(提供支持flv视频文件支持)--with-http_mp4_module #启用支持(提供支持mp4视频文件支持,提供伪流媒体服务端支持)--with-pcre=/usr/local/pcre-8.37 #需要注意,这里指的是源码,用#./configure --help |grep pcre查看帮助[root@vdevops nginx-1.9.15]# make -j 4 && make install-j 4 使用4个cpu进行编译,加快编译速度[root@vdevops nginx-1.9.15]# ll /opt/nginx/total 16drwxr-xr-x. 2 root root 4096 Apr 25 14:12 conf #Nginx相关配置文件 drwxr-xr-x. 2 root root 4096 Apr 25 14:12 html #网站根目录drwxr-xr-x. 2 root root 4096 Apr 25 14:12 logs #日志文件drwxr-xr-x. 2 root root 4096 Apr 25 14:12 sbin #Nginx启动脚本 配置Nginx支持php文件[root@vdevops nginx-1.9.15]# vim /opt/nginx/conf/nginx.conf 启动Nginx服务[root@vdevops nginx-1.9.15]# /opt/nginx/sbin/nginx优化nginx启动命令执行路径[root@vdevops init.d]# ln -s /opt/nginx/sbin/nginx /usr/local/sbin/[root@vdevops init.d]# vim /etc/init.d/nginx编辑nginx启动脚本 ! /bin/sh chkconfig: 2345 55 25 Description: Startup script for nginx webserver on Debian. Place in /etc/init.d and run 'update-rc.d -f nginx defaults', or use the appropriate command on your distro. For CentOS/Redhat run: 'chkconfig --add nginx' BEGIN INIT INFO Provides: nginx Required-Start: $all Required-Stop: $all Default-Start: 2 3 4 5 Default-Stop: 0 1 6 Short-Description: starts the nginx web server Description: starts nginx using start-stop-daemon END INIT INFO Author: licess website: http://lnmp.org PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/binNAME=nginxNGINX_BIN=/opt/nginx/sbin/$NAMECONFIGFILE=/opt/nginx/conf/$NAME.confPIDFILE=/opt/nginx/logs/$NAME.pid case "$1" in start) echo -n "Starting $NAME... " if netstat -tnpl | grep -q nginx;then echo "$NAME (pid `pidof $NAME`) already running." exit 1 $NGINX_BIN -c $CONFIGFILE if [ "$?" != 0 ] ; then echo " failed" exit 1 echo " done" stop) echo -n "Stoping $NAME... " if ! netstat -tnpl | grep -q nginx; then echo "$NAME is not running." exit 1 $NGINX_BIN -s stop if [ "$?" != 0 ] ; then echo " failed. Use force-quit" exit 1 echo " done" status) if netstat -tnpl | grep -q nginx; then PID=`pidof nginx` echo "$NAME (pid $PID) is running..." echo "$NAME is stopped" exit 0 force-quit) echo -n "Terminating $NAME... " if ! netstat -tnpl | grep -q nginx; then echo "$NAME is not running." exit 1 kill `pidof $NAME` if [ "$?" 
!= 0 ] ; then echo " failed" exit 1 echo " done" restart) $0 stop sleep 1 $0 start reload) echo -n "Reload service $NAME... " if netstat -tnpl | grep -q nginx; then $NGINX_BIN -s reload echo " done" echo "$NAME is not running, can't reload." exit 1 configtest) echo -n "Test $NAME configure files... " $NGINX_BIN -t echo "Usage: $0 {start|stop|force-quit|restart|reload|status|configtest}" exit 1 esac设置nginx开机自启动[root@vdevops init.d]# chmod +x /etc/init.d/nginx [root@vdevops init.d]# chkconfig --add nginx[root@vdevops init.d]# chkconfig nginx on浏览器访问验证: 扩展:nginx维护命令[root@vdevops init.d]# nginx –t #检查配置文件是否有语法错误nginx: the configuration file /opt/nginx/conf/nginx.conf syntax is oknginx: configuration file /opt/nginx/conf/nginx.conf test is successful [root@vdevops init.d]# nginx –V #查看nginx配置参数nginx version: nginx/1.9.15built by gcc 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC) configure arguments: --prefix=/opt/nginx --with-http_dav_module --with-http_stub_status_module --with-http_addition_module --with-http_sub_module --with-http_flv_module --with-http_mp4_module --with-pcre=/usr/local/pcre-8.38 --user=nginx --group=nginx 注意:重新编译时,一定要查看以前的编译配置,只需要在原有的配置参数后添加新的参数即可。[root@vdevops init.d]# nginx -s reload #平滑重载nginx配置文件,不需要重启nginx服务测试nginx启动脚本 安装mysqlhttp://www.mysql.com/查看系统中是否已自带mysql相关[root@vdevops ~]# rpm -qa | grep mysqlmysql-libs-5.1.73-5.el6_6.x86_64删除自带的mysql相关[root@vdevops ~]# yum remove mysql –y下载最新的mysql源码包[root@vdevops init.d]# wget -c http://cdn.mysql.com//Downloads/MySQL-5.7/mysql-5.7.12.tar.gz -P /usr/local/src/新建mysql用户和mysql组[root@vdevops init.d]# groupadd -r mysql && useradd -r -g mysql -s /sbin/nologin -M mysql[root@vdevops ~]# cd /usr/local/src/[root@vdevops src]# md5sum mysql-5.7.12.tar.gz #[md5校验]af17ba16f1b21538c9de092651529f7c mysql-5.7.12.tar.gz [root@vdevops src]# tar zxvf mysql-5.7.12.tar.gz && cd mysql-5.7.12创建mysql安装目录和数据存放目录,虚拟机添加一块新的硬盘,创建分区/dev/sdb1,并分配所有空间[root@vdevops src]# mkfs.ext4 /dev/sdb1 mke2fs 1.41.12 (17-May-2010)Filesystem label=OS type: LinuxBlock size=4096 (log=2)Fragment size=4096 (log=2)Stride=0 blocks, Stripe width=0 blocks1310720 inodes, 5241198 blocks262059 blocks (5.00%) reserved for the super userFirst data block=0Maximum filesystem blocks=4294967296160 block groups32768 blocks per group, 32768 fragments per group8192 inodes per groupSuperblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000 Writing inode tables: done Creating journal (32768 blocks): doneWriting superblocks and filesystem accounting information: done This filesystem will be automatically checked every 20 mounts or180 days, whichever comes first. 
Use tune2fs -c or -i to override.[root@vdevops src]# vgs VG #PV #LV #SN Attr VSize VFree vg_vdevops 1 2 0 wz--n- 19.80g 0 [root@vdevops src]# pvcreate /dev/sdb1 Physical volume "/dev/sdb1" successfully created[root@vdevops src]# vgextend vg_vdevops /dev/sdb1 Volume group "vg_vdevops" successfully extended[root@vdevops src]# lvcreate -n data -L 19G vg_vdevops Logical volume "data" created.[root@vdevops src]# mkfs.ext4 /dev/vg_vdevops/data mke2fs 1.41.12 (17-May-2010)Filesystem label=OS type: LinuxBlock size=4096 (log=2)Fragment size=4096 (log=2)Stride=0 blocks, Stripe width=0 blocks1310720 inodes, 5236736 blocks261836 blocks (5.00%) reserved for the super userFirst data block=0Maximum filesystem blocks=4294967296160 block groups32768 blocks per group, 32768 fragments per group8192 inodes per groupSuperblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000 Writing inode tables: done Creating journal (32768 blocks): doneWriting superblocks and filesystem accounting information: done This filesystem will be automatically checked every 32 mounts or180 days, whichever comes first. Use tune2fs -c or -i to override.[root@vdevops src]# lvextend -L +1000M /dev/vg_vdevops/data Size of logical volume vg_vdevops/data changed from 19.00 GiB (4864 extents) to 19.98 GiB (5114 extents). Logical volume data successfully resized[root@vdevops src]# resize2fs /dev/vg_vdevops/data resize2fs 1.41.12 (17-May-2010)Resizing the filesystem on /dev/vg_vdevops/data to 5240832 (4k) blocks.The filesystem on /dev/vg_vdevops/data is now 5240832 blocks long.[root@vdevops src]# vgs VG #PV #LV #SN Attr VSize VFree vg_vdevops 2 3 0 wz--n- 39.79g 0[root@vdevops src]# mkdir /data[root@vdevops src]# mount /dev/vg_vdevops/data /data/[root@vdevops src]# df -hFilesystem Size Used Avail Use% Mounted on/dev/mapper/vg_vdevops-LogVol01 18G 1.5G 16G 9% / tmpfs 935M 0 935M 0% /dev/shm/dev/sda1 190M 37M 143M 21% /boot/dev/mapper/vg_vdevops-data 20G 44M 19G 1% /data 开机自动挂载data目录[root@vdevops src]# echo "/dev/sdb1 /data ext4 defaults 0 0" >> /etc/fstab注意:mysql-5.7.12安装时占用空间比较大,虚拟机环境下建议新添加一块硬盘,并加到同一个lvm组中,真是服务器不需要,按照下面步骤扩展lvm安装必须软件包建议使用网络yum源,mysql-5.7.12.tar.gz的编译对软件包的版本要求比较高,其中cmake的版本要不低于2.8 [root@vdevops src]# yum -y install gcc gcc-c++ autoconf automake zlib libxml ncurses-devel libtool-ltdl-devel* make 编译Cmake编译工具 [root@vdevops src]# rpm -qa | grep cmake[root@vdevops src]# wget -c http://git.typecodes.com/libs/ccpp/cmake-3.2.1.tar.gz[root@vdevops src]# tar zxvf cmake-3.2.1.tar.gz && cd cmake-3.2.1 && ./configure[root@vdevops cmake-3.2.1]# make && make install[root@vdevops cmake-3.2.1]# ln -s /usr/local/bin/cmake /usr/bin/cmake[root@vdevops cmake-3.2.1]# cmake --versioncmake version 3.2.1 CMake suite maintained and supported by Kitware (kitware.com/cmake). 
Boost库,一个开源可移植的C++库,是C++标准化进程的开发引擎之一。 [root@vdevops src]# wget -c http://liquidtelecom.dl.sourceforge.net/project/boost/boost/1.59.0/boost_1_59_0.tar.gz -P /usr/local/src/注:从mysql-5.7.5之后源码编译,必须编译boost库[root@vdevops src]# mkdir -p /data/mysql/data[root@vdevops src]# tar zxvf boost_1_59_0.tar.gz [root@vdevops src]# tar zxvf mysql-5.7.12.tar.gz[root@vdevops src]# mv boost_1_59_0 /data/boost bison:GUN分析生成器 [root@vdevops src]# tar zxvf bison-3.0.tar.gz && cd bison-3.0 && ./configure[root@vdevops src]# make && make install Mysql编译配置相关参数:[root@vdevops src]# cd mysql-5.7.12[root@vdevops mysql-5.7.12]# cmake -DCMAKE_INSTALL_PREFIX=/data/mysql -DMYSQL_DATADIR=/data/mysql/data -DSYSCONFDIR=/etc -DWITH_MYISAM_STORAGE_ENGINE=1 -DWITH_INNOBASE_STORAGE_ENGINE=1 -DWITH_MEMORY_STORAGE_ENGINE=1 -DWITH_READLINE=1 -DMYSQL_UNIX_ADDR=/data/mysql/mysql.sock -DMYSQL_TCP_PORT=3306 -DENABLED_LOCAL_INFILE=1 -DWITH_PARTITION_STORAGE_ENGINE=1 -DEXTRA_CHARSETS=all -DDEFAULT_CHARSET=utf8 -DDEFAULT_COLLATION=utf8_general_ci -DDOWNLOAD_BOOST=1 -DWITH_BOOST=/data/boost[root@vdevops mysql-5.7.12]# grep processor /proc/cpuinfo | wc -l2[root@vdevops mysql-5.7.12]# make -j 2 && make install CMAKE_INSTALL_PREFIX:指定MySQL程序的安装目录,默认/usr/local/mysqlDEFAULT_CHARSET:指定服务器默认字符集,默认latin1DEFAULT_COLLATION:指定服务器默认的校对规则,默认latin1_general_ciENABLED_LOCAL_INFILE:指定是否允许本地执行LOAD DATA INFILE,默认OFFWITH_COMMENT:指定编译备注信息WITH_xxx_STORAGE_ENGINE:指定静态编译到mysql的存储引擎,MyISAM,MERGE,MEMBER以及CSV四种引擎默认即被编译至服务器,不需要特别指定。WITHOUT_xxx_STORAGE_ENGINE:指定不编译的存储引擎SYSCONFDIR:初始化参数文件目录MYSQL_DATADIR:数据文件目录MYSQL_TCP_PORT:服务端口号,默认3306MYSQL_UNIX_ADDR:socket文件路径,默认/tmp/mysql.sock看到下图代表已经编译安装好了mysql-5.7.12 编译完成后,建议到/data/mysql/support-files/下面查看相关配置文件。[root@vdevops ~]# cd /data/mysql/support-files/[root@vdevops support-files]# lltotal 28-rw-r--r--. 1 mysql mysql 773 Mar 29 02:06 magic-rw-r--r--. 1 mysql mysql 1126 Apr 25 17:58 my-default.cnf-rwxr-xr-x. 1 mysql mysql 1061 Apr 25 17:58 mysqld_multi.server-rwxr-xr-x. 1 mysql mysql 869 Apr 25 17:58 mysql-log-rotate-rwxr-xr-x. 1 mysql mysql 10945 Apr 25 17:58 mysql.server查看编译成功之后的MySQL安装使用如下两条命令,查看MySQL的安装目录 /usr/local/mysql/ 下面是否生成了相关目录文件(最重要的当然是bin、sbin和lib目录)。如果lib目录下面没有生成如图所示的.so动态库文件和.a静态库文件,那么说明安装不成功(即使成功了也可能会导致php进程无法找到mysql的相关库文件) 开始设置MySQL的配置文件my.cnf先把编译生成的my.cnf文件备份,然后把自己之前整理过的mysql配置文件,上传到当前服务器的/etc目录下即可。 如果默认my.cnf不存在,需要手动新建[root@vdevops etc]# vim /etc/my.cnf添加以下配置内容,相关参数可根据实际情况情绪调整。 For advice on how to change settings please see http://dev.mysql.com/doc/refman/5.7/en/server-configuration-defaults.html * DO NOT EDIT THIS FILE. It's a template which will be copied to the * default location during install, and will be replaced if you * upgrade to a newer version of MySQL. [client]port=3306socket=/data/mysql/mysql.sock [mysqld] Remove leading # and set to the amount of RAM for the most important data cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%. innodb_buffer_pool_size = 128M Remove leading # to turn on a very important data integrity option: logging changes to the binary log between backups. log_bin These are commonly set, remove the # and set as required. 
user = mysqlbasedir = /data/mysqldatadir = /data/mysql/dataport=3306server-id = 1socket=/data/mysql/mysql.sock character-set-server = utf8log-error = /data/mysql/error.logpid-file = /data/mysql/mysql.pidgeneral_log = 1skip-name-resolve skip-networking back_log = 300 max_connections = 1000max_connect_errors = 6000open_files_limit = 65535table_open_cache = 128 max_allowed_packet = 4Mbinlog_cache_size = 1Mmax_heap_table_size = 8Mtmp_table_size = 16M read_buffer_size = 2Mread_rnd_buffer_size = 8Msort_buffer_size = 8Mjoin_buffer_size = 28Mkey_buffer_size = 4M thread_cache_size = 8 query_cache_type = 1query_cache_size = 8Mquery_cache_limit = 2M ft_min_word_len = 4 log_bin = mysql-binbinlog_format = mixedexpire_logs_days = 30 performance_schema = 0explicit_defaults_for_timestamp lower_case_table_names = 1 myisam_sort_buffer_size = 8Mmyisam_repair_threads = 1 interactive_timeout = 28800wait_timeout = 28800 Remove leading # to set options mainly useful for reporting servers. The server defaults are faster for transactions and fast SELECTs. Adjust sizes as needed, experiment to find the optimal values. join_buffer_size = 128M sort_buffer_size = 2M read_rnd_buffer_size = 2M Disabling symbolic-links is recommended to prevent assorted security risks symbolic-links=0 Recommended in standard MySQL setup sql_mode=NO_ENGINE_SUBSTITUTION,NO_AUTO_CREATE_USER,STRICT_TRANS_TABLES [mysqldump]quickmax_allowed_packet = 16M [myisamchk]key_buffer_size = 8Msort_buffer_size = 8Mread_buffer = 4Mwrite_buffer = 4M 添加mysql的环境变量将MySQL生成的bin目录添加到当前Linux系统的环境变量中[root@vdevops etc]# echo -e 'nnexport PATH=/data/mysql/bin:$PATHn' >> /etc/profile && source /etc/profile[root@vdevops etc]# cat /etc/profile能看到添加MySQL环境变量已经成功 修改MySQL数据库文件存放路径权限以及相关安全配置文中前面已经创建过/data/mysql/data,用于存放MySQL的数据库文件,同时设置其用户和用户组为之前创建的mysq,权限700,这样其他用户无法进行读写,尽量保证数据库的安全。[root@vdevops ~]# chown -R mysql:mysql /data/mysql[root@vdevops ~]# chmod -R go-rwx /data/mysql/data查看相关目录权限已经设置OK 初始化MySQL自身的数据库在MySQL安装目录的 bin 路径下,执行mysqld命令,初始化MySQL自身的数据库。 参数user表示用户,basedir表示mysql的安装路径,datadir表示数据库文件存放路径 [root@vdevops ~]# /data/mysql/bin/mysqld --initialize-insecure --user=mysql --basedir=/data/mysql --basedir=/data/mysql/data 注:查看error.log,ll -rt /data/mysql/data/ 确保MySQL数据库初始化成功,否侧后面启动MySQL服务会报错。设置MySQL日志文件存放目录以及设置开机自动默认配置文件中都已经设置OK,如需更改建议放到/var/log下面 [root@vdevops ~]# mkdir -p /var/run/mysql && mkdir -p /var/log/mysql [root@vdevops ~]# chown -R mysql:mysql /var/log/mysql && chown -R mysql:mysql /var/run/mysql 本文中默认全部放到mysql目录下面,便于管理。 设置开机自启动 [root@vdevops ~]# cp /data/mysql/support-files/mysql.server /etc/init.d/mysqld[root@vdevops ~]# chmod +x /etc/init.d/mysqld [root@vdevops ~]# chkconfig mysqld on启动MySQL服务完成上面的操作之后,就可以正式启动MySQL服务,启动MySQL进程服务的命令如下:[root@vdevops ~]# mysqld_safe --user=mysql --datadir=/data/mysql/data/ --log-error=/data/mysql/error.log & 启动报错,查看error.log,分析相关原因,此次报错问题在于mysql目录权限赋予的不正确,导致MySQL数据库初始化失败。[root@vdevops ~]# ntpdate time.nist.gov 5 May 11:02:31 ntpdate[16405]: step time server 216.229.0.179 offset 791817.941700 sec[root@vdevops ~]# /etc/init.d/mysqld startStarting MySQL SUCCESS!然后使用下面这2个命令查看MySQL服务进程和端口监听情况:[root@vdevops ~]# ps -ef | grep mysql[root@vdevops ~]# netstat -nlpt | grep 3306 初始化MySQL数据库的root用户密码和Oracle数据库一样,MySQL数据库也默认自带了一个 root 用户(这个和当前Linux主机上的root用户是完全不搭边的),我们在设置好MySQL数据库的安全配置后初始化root用户的密码。配制过程中,一路输入 y 就行了。这里只说明下MySQL5.7.7rc版本中,用户密码策略分成低级 LOW 、中等 MEDIUM 和超强 STRONG三种,推荐使用中等 MEDIUM 级别! 
LOW:【只需要长度大于或等于8】 MEDIUM:【还需要包含数字、大小写和类似于@#%等特殊字符】 STRONG:【还需要包含字典文件】 Do you wish to continue with the password provided?(Press y|Y for Yes, any other key for No) : y By default, a MySQL installation has an anonymous user,allowing anyone to log into MySQL without having to havea user account created for them. This is intended only fortesting, and to make the installation go a bit smoother.You should remove them before moving into a productionenvironment. Remove anonymous users? (Press y|Y for Yes, any other key for No) : ySuccess. 删除匿名用户 Normally, root should only be allowed to connect from'localhost'. This ensures that someone cannot guess atthe root password from the network. Disallow root login remotely? (Press y|Y for Yes, any other key for No) : ySuccess. 禁止root用户登录 By default, MySQL comes with a database named 'test' thatanyone can access. This is also intended only for testing,and should be removed before moving into a productionenvironment. Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y Dropping test database...Success. 删除测试库 Removing privileges on test database...Success. Reloading the privilege tables will ensure that all changesmade so far will take effect immediately.Reload privilege tables now? (Press y|Y for Yes, any other key for No) : ySuccess. All done!登录测试 将ySQL数据库的动态链接库共享至系统链接库通常MySQL数据库还会被类似于PHP等服务调用,因此我们需要将MySQL编译后的lib库文件添加到当前Linux主机链接库/etc/ld.so.conf.d/下,这样MySQL服务就可以被其他服务调用了。[root@vdevops ~]# echo "/data/mysql/lib" > /etc/ld.so.conf.d/mysql.conf[root@vdevops ~]# ldconfig #使生效[root@vdevops ~]# ldconfig -v | grep mysql #查看效果ldconfig: /etc/ld.so.conf.d/kernel-2.6.32-573.el6.x86_64.conf:6: duplicate hwcap 1 nosegneg/data/mysql/lib: libmysqlclient.so.20 -> libmysqlclient.so.20.2.1 创建其他MySQL数据库用户使用MySQL数据库root管理员用户登录MySQL数据库后,可以管理数据库和其他用户。这里创建一个名为vdevops的MySQL用户(密码为:@Vdevops1217.com)和名为vdevops的数据库。[root@vdevops ~]# mysql -u root -p (初始化数据库用户时设置的密码)mysql> CREATE DATABASE vdevops DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;Query OK, 1 row affected (0.01 sec)mysql> grant all privileges on vdevops.* to vdevops@localhost identified by '@Vdevops1217.com';Query OK, 0 rows affected, 2 warnings (0.06 sec) mysql> SELECT DISTINCT CONCAT('User: ''',user,'''@''',host,''';') AS query FROM mysql.user; query User: 'mysql.sys'@'localhost'; User: 'root'@'localhost'; User: 'vdevops'@'localhost'; 3 rows in set (0.00 sec)mysql> flush privileges;Query OK, 0 rows affected (0.02 sec) MySQL编译安装时常见错误分析1 没有安装MySQL所需要的boost库 测试发现编译MySQL5.7以及更高的版本时,都需要下载并引用或者直接安装boost库,否则在执行cmake命令时会报如下错误: -- Running cmake version 3.2.1-- Configuring with MAX_INDEXES = 64U-- SIZEOF_VOIDP 8-- MySQL 5.7.6-m16 [MySQL版本]-- Packaging as: mysql-5.7.6-m16-Linux-x86_64-- Looked for boost/version.hpp in and -- BOOST_INCLUDE_DIR BOOST_INCLUDE_DIR-NOTFOUND-- LOCAL_BOOST_DIR -- LOCAL_BOOST_ZIP -- Could not find (the correct version of) boost. [关键错误信息]-- MySQL currently requires boost_1_57_0 [解决办法] CMake Error at cmake/boost.cmake:76 (MESSAGE): [具体错误和解决方法] You can download it with -DDOWNLOAD_BOOST=1 -DWITH_BOOST= This CMake script will look for boost in . If it is not there, it will download and unpack it (in that directory) for you. 
If you are inside a firewall, you may need to use an http proxy: export http_proxy=http://example.com:80 Call Stack (most recent call first): cmake/boost.cmake:228 (COULD_NOT_FIND_BOOST) CMakeLists.txt:452 (INCLUDE) -- Configuring incomplete, errors occurred!See also "/mydata/mysql-5.7.6-m16/CMakeFiles/CMakeOutput.log".解决方法:直接按照前文《2015博客升级记(四):CentOS 7.1编译安装MySQL5.7.7rc》小节2中的方法安装Boost库即可。或者先下载Boost库,然后通过在cmake命令后面添加参数-DDOWNLOAD_BOOST=1 -DWITH_BOOST=Boost库路径即可。 2 执行cmake时缺少Ncurses库的支持 Ncurses提供功能键定义(快捷键),屏幕绘制以及基于文本终端的图形互动功能的动态库。 [root@typecodes ~]# yum -y install ncurses-devel -- Could NOT find Curses (missing: CURSES_LIBRARY CURSES_INCLUDE_PATH) CMake Error at cmake/readline.cmake:64 (MESSAGE): Curses library not found. Please install appropriate package, remove CMakeCache.txt and rerun cmake.On Debian/Ubuntu, package name is libncurses5-dev, on Redhat and derivates it is ncurses-devel. Call Stack (most recent call first): cmake/readline.cmake:107 (FIND_CURSES) cmake/readline.cmake:181 (MYSQL_USE_BUNDLED_EDITLINE) CMakeLists.txt:480 (MYSQL_CHECK_EDITLINE) -- Configuring incomplete, errors occurred!See also "/mydata/mysql-5.7.6-m16/CMakeFiles/CMakeOutput.log".See also "/mydata/mysql-5.7.6-m16/CMakeFiles/CMakeError.log".解决方法:直接执行命令yum -y install ncurses-devel安装Ncurses即可。 3 安装MySQL完后,无法正常启动服务 在安装完MySQL后,执行命令service mysqld start失败,也即无法正常启动MySQL服务。 无法正常启动MySQL服务 解决方法:主要通过命令systemctl status mysqld.service和MySQL的日志来分析。如上图所示,在日志文件/var/log/mysql/error.log中可以看到具体的ERROR信息:Could not create unix socket lock file /var/run/mysql/mysql.sock.lock。这种错误一般都是目录不存在或者权限不足,所以我们直接使用命令mkdir -p /var/log/mysql/创建该目录即可,然后可以设置目录权限chown -R mysql:mysql /var/log/mysql/。 4 操作MySQL时,报错You must SET PASSWORD before executing this statement 用MySQL的root用户登录数据库后,如果之前没有设置密码,那么执行任何操作命令时,会提示如下错误信息。 mysql> CREATE DATABASE testmysqldatabase DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;ERROR 1820 (HY000): You must SET PASSWORD before executing this statement常规的使用MySQL安全模式的解决方法如下,但是在MySQL5.7以及更高版本下是行不通的。 [root@typecodes ~]# service mysqld stopShutting down MySQL..[ OK ][root@typecodes ~]# /mydata/mysql/bin/mysqld_safe --user=mysql --skip-networking --skip-grant-tables &[1] 3688[root@typecodes ~]# 150409 23:02:02 mysqld_safe Logging to '/var/log/mysql/error.log'.150409 23:02:02 mysqld_safe Starting mysqld daemon with databases from /mydata/mysql/data 重新登录mysql后,设置root密码 mysql> set password='this is a password sample';ERROR 1290 (HY000): The MySQL server is running with the --skip-grant-tables option so it cannot execute this statement有效的解决方法: [root@typecodes ~]# mysql -u root -p [使用root用户登录]Enter password: [无密码,直接回车]Welcome to the MySQL monitor. Commands end with ; or g.Your MySQL connection id is 3Server version: 5.7.6-m16 Oracle is a registered trademark of Oracle Corporation and/or its Other names may be trademarks of their respectiveowners. Type 'help;' or 'h' for help. Type 'c' to clear the current input statement. 
mysql> select * from mysql.user;ERROR 1820 (HY000): You must SET PASSWORD before executing this statementmysql> set password='this is a password sample';ERROR 1819 (HY000): Your password does not satisfy the current policy requirements 设置当前root用户密码 mysql> set password='your password';Query OK, 0 rows affected (0.00 sec) mysql> flush privileges;Query OK, 0 rows affected (0.00 sec)需要说明的是,修改用户密码的SQL语句在不同的MySQL版本中是不同的。下面这3种是MySQL5.5以下的版本的修改方法,但是不适用于MySQL5.7以及更高版本。 mysql> update mysql.user set PASSWORD='your password' where User='root'; mysql> SET PASSWORD for root@'localhost' = PASSWORD('your password'); mysql> SET PASSWORD = PASSWORD('your password');到此MySQL-5.7.12编译安装全部完成。安装PHP在Nginx中,我们使用的是php-fpm来对php页面解析,PHP-FPM其实是PHP源代码的一个补丁,指在将FastCGI进程管理整合进PHP包中。必须将它patch到你的PHP源代码中,再编译安装PHP后才可以使用 从PHP5.3.3开始,PHP中直接整合了PHP-FPM,所以从PHP5.3.3版本以后,不需要下载PHP-FPM补丁包了,下面是PHP-FPM官方发出来的通知: http://php-fpm.org/download http://php.net/downloads.php php-7.0.6.tar.gz (sig) [17,781Kb] md5:c9e2ff2d6f843a584179ce96e63e38f9sha256:f6b47cb3e02530d96787ae5c7888aefbd1db6ae4164d68b88808ee6f4da94277可自行校验wget的安装包是否完整http://cn2.php.net/distributions/php-7.0.6.tar.gz安装依赖关系依赖包下载地址https://curl.haxx.se/download/curl-7.48.0.tar.gzhttp://ftp.gnu.org/pub/gnu/libiconv/libiconv-1.14.tar.gzhttp://iweb.dl.sourceforge.net/project/mcrypt/Libmcrypt/2.5.8/libmcrypt-2.5.8.tar.gzhttp://iweb.dl.sourceforge.net/project/mhash/mhash/0.9.9.9/mhash-0.9.9.9.tar.gz http://iweb.dl.sourceforge.net/project/mcrypt/MCrypt/2.6.8/mcrypt-2.6.8.tar.gz[root@vdevops ~]# wget -c http://ftp.gnu.org/pub/gnu/libiconv/libiconv-1.14.tar.gz -P /usr/local/src/[root@vdevops ~]# wget -c http://iweb.dl.sourceforge.net/project/mcrypt/Libmcrypt/2.5.8/libmcrypt-2.5.8.tar.gz -P /usr/local/src/[root@vdevops ~]# wget -c http://iweb.dl.sourceforge.net/project/mcrypt/MCrypt/2.6.8/mcrypt-2.6.8.tar.gz -P /usr/local/src/[root@vdevops ~]# wget -c http://iweb.dl.sourceforge.net/project/mhash/mhash/0.9.9.9/mhash-0.9.9.9.tar.gz -P /usr/local/src/ libiconv库为需要做转换的应用提供了一个iconv()的函数,以实现一个字符编码到另一个字符编码的转换。 错误提示:configure: error: Please reinstall the iconv library.[root@vdevops ~]# cd /usr/local/src/[root@vdevops src]# tar zxvf libiconv-1.14.tar.gz && cd libiconv-1.14[root@vdevops libiconv-1.14]# ./configure --prefix=/usr/local/libiconv[root@vdevops libiconv-1.14]# make -j 2 && make installlibmcrypt是加密算法扩展库。 错误提示:configure: error: Cannot find imap library (libc-client.a). Please check your c-client installation.[root@vdevops ~]# cd /usr/local/src/[root@vdevops src]# tar zxvf libmcrypt-2.5.8.tar.gz && cd libmcrypt-2.5.8[root@vdevops libmcrypt-2.5.8]# ./configure && make -j 2 && make installMhash是基于离散数学原理的不可逆向的php加密方式扩展库,其在默认情况下不开启。 mhash的可以用于创建校验数值,消息摘要,消息认证码,以及无需原文的关键信息保存 错误提示:configure: error: “You need at least libmhash 0.8.15 to compile this program. http://mhash.sf.net/”[root@vdevops src]# tar zxvf mhash-0.9.9.9.tar.gz && cd mhash-0.9.9.9[root@vdevops mhash-0.9.9.9]# ./configure && make -j 2 && make installmcrypt 是 php 里面重要的加密支持扩展库,Mcrypt扩展库可以实现加密解密功能,就是既能将明文加密,也可以密文还原。[root@vdevops src]# tar zxvf mcrypt-2.6.8.tar.gz && cd mcrypt-2.6.8[root@vdevops mcrypt-2.6.8]# ./configure && make -j 2 && make installchecking for libmcrypt-config... nochecking for libmcrypt - version >= 2.5.0... no* Could not run libmcrypt test program, checking why...* The test program failed to compile or link. See the file config.log for the* exact error that occured. This usually means LIBMCRYPT was incorrectly installed* or that you have moved LIBMCRYPT since it was installed. 
In the latter case, you* may want to edit the libmcrypt-config script: noconfigure: error: * libmcrypt was not found解决办法:gcc编译的时候根据自身定义的变量寻找相关函数库等文件,libmcrypt也是刚安装的,在变量中没有定义出来,所以手动添加:export LD_LIBRARY_PATH=/usr/local/lib:LD_LIBRARY_PATH[root@vdevops etc]# echo -e 'nnexport LD_LIBRARY_PATH=/usr/local/lib:LD_LIBRARY_PATHn' >> /etc/profile && source /etc/profile[root@vdevops ~]# yum -y install php-pearpear按照一定的分类来管理pear应用代码库,你的pear代码可以组织到其中适当的目录中,其他人可以方便的检索并分享到你的成果;pear不仅仅是一个代码仓库,它同时也是一个标准,使用这个标准来书写你的php代码,将会增强你的程序的可读性,复用性,减少出错的几率;Pear通过两个类为你搭建了一个框架,实现了诸如析构函数,错误捕获功能,你通过继承就可以使用这些功能.编译安装php[root@vdevops ~]# cd /usr/local/src/[root@vdevops php-7.0.6]# ./configure --prefix=/opt/php --with-config-file-path=/opt/php/ --enable-fpm --with-mysql=mysqlnd --with-mysqli=mysqlnd --with-pdo-mysql=mysqlnd --with-iconv-dir --with-freetype-dir --with-jpeg-dir --with-png-dir --with-zlib --with-libxml-dir=/usr --enable-xml --disable-rpath --enable-bcmath --enable-shmop --enable-sysvsem --enable-inline-optimization --with-curl --enable-mbregex --enable-mbstring --with-mcrypt --enable-ftp --with-gd --enable-gd-native-ttf --with-openssl --with-mhash --enable-pcntl --enable-sockets --with-xmlrpc --enable-zip --enable-soap --without-pear --with-gettext --disable-fileinfo --enable-maintainer-zts报错: [root@vdevops php-7.0.6]# rpm -qa | grep curlcurl-7.19.7-46.el6.x86_64python-pycurl-7.19.0-8.el6.x86_64libcurl-7.19.7-46.el6.x86_64[root@vdevops php-7.0.6]# rpm -e --nodeps curl-7.19.7-46.el6.x86_64[root@vdevops libmcrypt-2.5.8]# wget -c https://curl.haxx.se/download/curl-7.48.0.tar.gz -P /usr/local/src/[root@vdevops ~]# cd /usr/local/src/[root@vdevops src]# tar zxvf curl-7.48.0.tar.gz && cd curl-7.48.0[root@vdevops curl-7.48.0]# ./configure && make -j 2 && make install 继续编译php,报下面错误解决:[root@vdevops php-7.0.6]# yum install libjpeg-devel –y 解决:[root@vdevops php-7.0.6]# yum install libpng-devel –y 解决:[root@vdevops php-7.0.6]# yum install freetype-devel –y 出现上图界面,编译php中./configure完成,然后[root@vdevops php-7.0.6]# make -j 2 && make install 注:--with-config-file-path #设置 php.ini 的搜索路径。默认为 PREFIX/lib--with-mysql #mysql安装目录,对mysql的支持--with-mysqli #mysqli扩展技术不仅可以调用MySQL的存储过程、处理MySQL事务,而且还可以使访问数据库工作变得更加稳定。是一个数据库驱动--with-iconv-dir #种字符集间的转换--with-freetype-dir #打开对freetype字体库的支持 --with-jpeg-dir #打开对jpeg图片的支持 --with-png-dir #打开对png图片的支持--with-zlib #打开zlib库的支持,实现GZIP压缩输出 --with-libxml-dir=/usr #打开libxml2库的支持,libxml是一个用来解析XML文档的函数库--enable-xml #支持xml文档--disable-rpath #关闭额外的运行库文件--enable-bcmath #打开图片大小调整,用到zabbix监控的时候用到了这个模块--enable-shmop #shmop共享内存操作函数,可以与c/c++通讯--enable-sysvsem #加上上面shmop,这样就使得你的PHP系统可以处理相关的IPC函数(活动在内核级别)。--enable-inline-optimization #优化线程--with-curl #打开curl浏览工具的支持 --with-curlwrappers #运用curl工具打开url流 ,新版PHP5.6已弃用--enable-mbregex #支持多字节正则表达式--enable-fpm #CGI方式安装的启动程序,PHP-FPM服务--enable-mbstring #多字节,字符串的支持--with-gd #打开gd库的支持,是php处理图形的扩展库,GD库提供了一系列用来处理图片的API,使用GD库可以处理图片,或者生成图片。--enable-gd-native-ttf #支持TrueType字符串函数库--with-openssl #打开ssl支持--with-mhash #支持mhash算法扩展--enable-pcntl #freeTDS需要用到的,pcntl扩展可以支持php的多线程操作--enable-sockets #打开 sockets 支持--with-xmlrpc #打开xml-rpc的c语言--enable-zip #打开对zip的支持--enable-soap #扩展库通过soap协议实现了客服端与服务器端的数据交互操作--with-mcrypt #mcrypt算法扩展 mysqldnd即mysql native driver简写,即是由PHP源码提供的mysql驱动连接代码.它的目的是代替旧的libmysql驱动.PDO是一个应用层抽象类,底层和mysql server连接交互需要mysql驱动的支持. 也就是说无论你使用了何种驱动,都可以使用PDO. PDO是提供了PHP应用程序层API接口,而mysqlnd, libmysql则负责与mysql server进行网络协议交互(它并不提供php应用程序层API功能)3. 
为何要使用mysqlnd驱动?PHP官方手册描述:A.libmysql驱动是由mysql AB公司(现在是oracle公司)编写, 并按mysql license许可协议发布,所以在PHP中默认是被禁用的.而mysqlnd是由php官方开发的驱动,以php license许可协议发布,故就规避了许可协议和版权的问题B.因为mysqlnd内置于PHP源代码,故你在编译安装php时就不需要预先安装mysql server也可以提供mysql client API (mysql_connect, pdo , mysqli), 这将减化一些工作量.C. mysqlnd是专门为php优化编写的驱动,它使用了PHP本身的特性,在内存管理,性能上比libmysql更有优势. php官方的测试是:libmysql将每条记录在内存中保存了两份,而mysqlnd只保存了一份D. 一些新的或增强的功能增强的持久连接引入特有的函数mysqli_fetch_all()引入一些性能统计函数mysqli_get_cache_stats(), mysqli_get_client_stats(), mysqli_get_connection_stats(),使用上述函数,可很容易分析mysql查询的性能瓶颈!SSL支持(从php 5.3.3开始有效)压缩协议支持命名管道支持(php 5.4.0开始有效) 修改fpm配置php-fpm.conf.default文件名称 [root@vdevops php-7.0.6]# cp /opt/php/etc/php-fpm.conf.default /opt/php/etc/php-fpm.conf 复制php.ini配置文件 [root@vdevops php-7.0.6]# cp /usr/local/src/php-7.0.6/php.ini-production /opt/php/php.ini 复制php-fpm启动脚本到init.d [root@vdevops php-7.0.6]# cp /usr/local/src/php-7.0.6/sapi/fpm/init.d.php-fpm /etc/rc.d/init.d/php-fpm 设置php-fpm开机自启动 [root@vdevops php-7.0.6]# chmod +x /etc/init.d/php-fpm [root@vdevops php-7.0.6]# chkconfig --add php-fpm[root@vdevops php-7.0.6]# chkconfig php-fpm on[root@vdevops php-7.0.6]# /etc/init.d/php-fpm startStarting php-fpm [05-May-2016 15:54:35] WARNING: Nothing matches the include pattern '/opt/php/etc/php-fpm.d/*.conf' from /opt/php/etc/php-fpm.conf at line 125.[05-May-2016 15:54:35] ERROR: No pool defined. at least one pool section must be specified in config file[05-May-2016 15:54:35] ERROR: failed to post process the configuration[05-May-2016 15:54:35] ERROR: FPM initialization failed解决:[root@vdevops php-7.0.6]# cp /opt/php/etc/php-fpm.d/www.conf.default /opt/php/etc/php-fpm.d/www.conf 查看端口监听状态:[root@vdevops php-7.0.6]# netstat -nlpt | grep php-fpm 验证php测试页:[root@vdevops ~]# cd /opt/nginx/html/[root@vdevops html]# vim phpinfo.php[root@vdevops html]# cat phpinfo.php <?php phpinfo(); ?>浏览器输入:http://10.1.1.92/phpinfo.php 到此源码安装LNMP架构完成。
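补充一点:前文"配置Nginx支持php文件"一步只写了编辑 nginx.conf,没有贴出具体内容,这里给出一个常见的示意片段(假设 php-fpm 监听 127.0.0.1:9000、网站根目录为 /opt/nginx/html,具体请按自己的环境调整):

# server 段中加入:
location ~ \.php$ {
    root           /opt/nginx/html;
    fastcgi_pass   127.0.0.1:9000;
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
    include        fastcgi_params;
}

改完后执行 nginx -t 检查语法,再 nginx -s reload 使其生效,浏览器访问 phpinfo.php 能看到 PHP 信息页即说明 Nginx 与 php-fpm 已正常联通。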