This article assumes the reader is already familiar with Docker and understands the advantages of deploying with it.
Official docs: https://docs.docker.com/docker-for-mac/install/
Download: https://store.docker.com/editions/community/docker-ce-desktop-mac
After installation, a whale icon appears in the menu bar at the top of the Mac screen.
$ docker --version
Docker version 18.03, build c97c6d6
$ docker-compose --version
docker-compose version 1.21.2, build 8dd22a9
$ docker-machine --version
docker-machine version 0.14.0, build 9ba6da9
$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete
Digest: sha256:ca0eeb6fb05351dfc8759c20733c91def84cb8007aa89a5bf606bc8b315b9fc7
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
...
$ docker run -d -p 80:80 --name webserver nginx
Common commands:
docker ps: list running containers
docker stop: stop a running container
docker start: start a stopped container
docker ps -a: list all containers, including stopped ones
docker rm -f webserver: force-remove a container, even while it is running
docker images: list local images
docker rmi: remove an image
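The commands above can be strung together into a quick lifecycle session against the `webserver` container started earlier (a sketch; it assumes the `docker run -d -p 80:80 --name webserver nginx` step succeeded, and does nothing when Docker is absent):

```shell
# Walk the webserver container through stop/start/remove.
if command -v docker >/dev/null 2>&1; then
  docker ps                # running containers only
  docker stop webserver    # gone from `docker ps`...
  docker ps -a             # ...but still listed here, with status Exited
  docker start webserver   # back up again
  docker rm -f webserver   # -f removes it even while it is running
else
  echo "docker is not installed on this machine"
fi
```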
Docker Store page: https://store.docker.com/images/nginx
In fact, the nginx image was already pulled in the Hello World section above, by the `docker run ... nginx` command.
docker run --name mynginx -p 80:80 \
  -v /Users/gaoguangchao/Work/opt/local/nginx/logs:/var/log/nginx \
  -v /Users/gaoguangchao/Work/opt/local/nginx/conf.d:/etc/nginx/conf.d \
  -v /Users/gaoguangchao/Work/opt/local/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
  -v /Users/gaoguangchao/Work/opt/local/nginx/html:/etc/nginx/html \
  -d nginx
-d: run as a daemon (in the background)
--name mynginx: container name
-p 80:80: port mapping, host port:container port
-v: volume mount, host path:container path
To edit the configuration files directly and customize Nginx, the relevant directories inside the container need to be mounted from the host.
Directories/files to mount: /etc/nginx/conf.d, /etc/nginx/nginx.conf, /etc/nginx/html
One point deserves special attention: when the mount target is a file rather than a directory, note the following:
a. file mount syntax: -v host_path:container_path:ro
b. the file must already exist on the host; Docker will not create it automatically, and starting the container without it results in an error.
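As a concrete illustration of point b, the host-side file can be created before the container is started (a sketch; `/tmp/nginx-demo` is a stand-in for whatever host directory you actually use):

```shell
# Create the host-side file first; if it does not exist when the container starts,
# Docker creates a *directory* with that name and nginx fails to load its config.
mkdir -p /tmp/nginx-demo
touch /tmp/nginx-demo/nginx.conf
test -f /tmp/nginx-demo/nginx.conf && echo "host file ready"

# Then mount it read-only:
#   docker run -v /tmp/nginx-demo/nginx.conf:/etc/nginx/nginx.conf:ro ... nginx
```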
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;

    upstream demo {
        server 127.0.0.1:8080;
    }

    server {
        listen       80;
        server_name  request_log;

        location / {
            root   html;
            proxy_connect_timeout 3;
            proxy_send_timeout    30;
            proxy_read_timeout    30;
            proxy_pass http://demo;
        }

        error_page  500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
Debugging rarely goes smoothly on the first try; a useful technique is to read the exception entries in error.log to pinpoint problems.
Here a Spring Boot web server is started on the host, listening on port 8080; visit http://localhost:8080/index/hello in a browser to confirm it works.
Following the nginx.conf example in the previous section, add the upstream, server, and proxy_pass configuration so that port 80 is proxied to the backend, then restart the nginx container.
Visit http://localhost/index/hello in a browser; the page should load normally.
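After editing the mounted nginx.conf, the container does not need to be recreated; the configuration can be checked and reloaded in place (a sketch assuming the container name `mynginx` from the run command above; the block skips gracefully when Docker or the container is unavailable):

```shell
# Validate the mounted config, then reload nginx inside the running container.
if command -v docker >/dev/null 2>&1 \
   && docker ps --format '{{.Names}}' 2>/dev/null | grep -q '^mynginx$'; then
  docker exec mynginx nginx -t         # syntax-check /etc/nginx/nginx.conf
  docker exec mynginx nginx -s reload  # reload workers without restarting the container
else
  echo "mynginx is not running; start it first"
fi
```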
OpenResty is Nginx with a large set of bundled extensions; its installation is essentially the same as for Nginx.
Docker Store page: https://store.docker.com/community/images/openresty/openresty
docker pull openresty/openresty
docker run -d --name="openresty" -p 80:80 \
  -v /Users/gaoguangchao/Work/opt/local/openresty/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf:ro \
  -v /Users/gaoguangchao/Work/opt/local/openresty/logs:/usr/local/openresty/nginx/logs \
  -v /Users/gaoguangchao/Work/opt/local/openresty/conf.d:/etc/nginx/conf.d \
  -v /Users/gaoguangchao/Work/opt/local/openresty/html:/etc/nginx/html \
  openresty/openresty
The caveats are the same as for the Nginx installation and are not repeated here.
Docker Store page: https://store.docker.com/community/images/spotify/kafka
docker pull spotify/kafka
Run:
docker run -p 2181:2181 -p 9092:9092 --env ADVERTISED_HOST=127.0.0.1 --env ADVERTISED_PORT=9092 spotify/kafka
Port 2181 is ZooKeeper; port 9092 is Kafka.
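Once the container is up, the published ports can be smoke-tested from the host (a sketch; it uses bash's `/dev/tcp` so no extra tools are needed, and simply reports `closed` when nothing is listening):

```shell
# Check that the published zookeeper (2181) and kafka (9092) ports answer.
for p in 2181 9092; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$p") 2>/dev/null; then
    echo "port $p open"
  else
    echo "port $p closed"
  fi
done
```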
The startup log is printed to the console.
You can connect a visual client to the port for monitoring, such as ZooInspector or the IntelliJ IDEA Zookeeper plugin.
Kafka Manager is a web UI open-sourced by Yahoo for monitoring and configuring Kafka; it supports day-to-day monitoring and dynamic configuration changes.
Docker Store page: https://store.docker.com/community/images/sheepkiller/kafka-manager
docker pull sheepkiller/kafka-manager
Run:
docker run -it --rm -p 9000:9000 -e ZK_HOSTS="127.0.0.1:2181" -e APPLICATION_SECRET=letmein sheepkiller/kafka-manager
Port 2181 is the ZooKeeper instance deployed in the previous section; 9000 is Kafka Manager's web port.
The startup log is printed to the console.
Visit http://localhost:9000 in a browser.
Register the Kafka cluster using the buttons on the page; detailed usage is not covered here.
After registration, the page shows the cluster overview.
**pom dependencies**
<dependencies>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>jcl-over-slf4j</artifactId>
        <version>${org.slf4j-version}</version>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-1.2-api</artifactId>
        <version>${log4j2-version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-slf4j-impl</artifactId>
        <version>${log4j2-version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-api</artifactId>
        <version>${log4j2-version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>${log4j2-version}</version>
    </dependency>
    <dependency>
        <groupId>com.lmax</groupId>
        <artifactId>disruptor</artifactId>
        <version>3.2.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>0.10.1.0</version>
    </dependency>
</dependencies>
Configure log4j2 to print debug-level logs; this makes troubleshooting much easier (important).
Add a log4j2.xml file under the resources directory:
<?xml version="1.0" encoding="UTF-8"?>
<configuration status="WARN">
    <Properties>
        <Property name="pattern_layout">%d %-5p (%F:%L) - %m%n</Property>
        <Property name="LOG_HOME">/logs</Property>
    </Properties>
    <appenders>
        <Console name="CONSOLE" target="SYSTEM_OUT">
            <PatternLayout pattern="%d %-5p (%F:%L) - %m%n"/>
        </Console>
    </appenders>
    <loggers>
        <root level="debug" includeLocation="true">
            <appender-ref ref="CONSOLE"/>
        </root>
    </loggers>
</configuration>
For more on using log4j2, see: Migrating from Log4j 1 to Log4j 2 in Practice.
package com.moko.kafka;

import org.apache.kafka.clients.producer.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.Properties;

public class MokoProducer extends Thread {
    private static final Logger LOGGER = LoggerFactory.getLogger(MokoProducer.class);

    private final KafkaProducer<String, String> producer;
    private final String topic;
    private final boolean isAsync;

    public MokoProducer(String topic, boolean isAsync) {
        Properties properties = new Properties();
        // "78c4f4a0f989" is the kafka container's id/hostname; it must be
        // resolvable from the machine running this producer (e.g. via /etc/hosts).
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "78c4f4a0f989:9092");
        properties.put(ProducerConfig.CLIENT_ID_CONFIG, "MokoProducer");
        properties.put(ProducerConfig.ACKS_CONFIG, "all"); // wait for all replicas to acknowledge
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<String, String>(properties);
        this.topic = topic;
        this.isAsync = isAsync;
    }

    @Override
    public void run() {
        int seq = 0;
        while (true) {
            String msg = "Msg: " + seq;
            if (isAsync) {
                // fire-and-forget: do not wait for the send result
                producer.send(new ProducerRecord<String, String>(this.topic, msg));
            } else {
                // attach a callback that logs which partition the record landed in
                producer.send(new ProducerRecord<String, String>(this.topic, msg),
                        new MsgProducerCallback(msg));
            }
            seq++;
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    class MsgProducerCallback implements Callback {
        private final String msg;

        public MsgProducerCallback(String msg) {
            this.msg = msg;
        }

        public void onCompletion(RecordMetadata recordMetadata, Exception e) {
            if (recordMetadata != null) {
                LOGGER.info(msg + " was sent to partition no: " + recordMetadata.partition());
            } else {
                LOGGER.info("recordMetadata is null");
            }
            if (e != null)
                e.printStackTrace();
        }
    }

    public static void main(String args[]) {
        new MokoProducer("access-log", false).start();
    }
}
Run it briefly and the callback logs each message's destination partition.
package com.moko.kafka;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.Arrays;
import java.util.Properties;

public class MokoCustomer {
    private static final Logger LOGGER = LoggerFactory.getLogger(MokoCustomer.class);

    public static void main(String args[]) throws Exception {
        String topicName = "access-log";
        Properties props = new Properties();
        KafkaConsumer<String, String> consumer = getKafkaConsumer(props);
        consumer.subscribe(Arrays.asList(topicName));
        while (true) {
            // poll blocks for up to 100 ms waiting for new records
            ConsumerRecords<String, String> records = consumer.poll(100);
            if (!records.isEmpty()) {
                LOGGER.info("=========================");
            }
            for (ConsumerRecord<String, String> record : records) {
                LOGGER.info(record.value());
            }
        }
    }

    private static KafkaConsumer<String, String> getKafkaConsumer(Properties props) {
        // broker address; must be reachable from the machine running this consumer
        props.put("bootstrap.servers", "172.18.153.41:9092");
        props.put("group.id", "group-1");
        props.put("enable.auto.commit", "true");      // commit offsets automatically
        props.put("auto.commit.interval.ms", "1000");
        props.put("session.timeout.ms", "30000");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return new KafkaConsumer<String, String>(props);
    }
}
Run it and each consumed message is logged to the console.
Because the whole environment is built with Docker on a local machine, the most common problems are network related, such as host configuration. As long as you keep this in mind and read the exception logs carefully, these issues are not hard to track down.
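A typical example is the broker address: the producer above points at `78c4f4a0f989:9092`, the Kafka container's id, so a client on the host can only connect if that name resolves. A quick check (a sketch; `getent` is available on Linux, and the fallback message suggests the usual fixes):

```shell
# Verify that the hostnames used by the Kafka clients resolve on this machine.
for h in localhost 78c4f4a0f989; do
  if getent hosts "$h" >/dev/null 2>&1; then
    echo "$h resolves"
  else
    echo "$h does not resolve: add an /etc/hosts entry, or run the kafka container with ADVERTISED_HOST=127.0.0.1"
  fi
done
```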
(Figure: project directory structure)
This concludes the walkthrough of using Docker to set up Nginx/OpenResty, Kafka, and Kafka Manager.
A follow-up post will cover building an nginx + lua + kafka log collection pipeline with Docker. Stay tuned.
About the author:
Gao Guangchao: years of hands-on internet development and architecture experience, focused on designing and delivering highly available, high-performance, and scalable internet architectures.
This article was first published on Gao Guangchao's Jianshu blog.