# Getting Started Quickly on a Local Machine

If you are new to Elasticsearch, you can have it running with Docker in just a few minutes. This article walks through building everything step by step, from a single development node to a production cluster.
## Prerequisites

### System Requirements
| Environment | Minimum | Recommended |
|---|---|---|
| Development (single node) | 4GB RAM, 2 CPUs | 8GB RAM, 4 CPUs |
| Production (3 nodes) | 16GB RAM/node | 32GB RAM/node |
| Disk | 20GB SSD | 100GB+ SSD |
### Required Software

```bash
# Check the Docker version
docker --version          # 20.10+ recommended

# Check the Docker Compose version
docker compose version    # v2.0+ recommended

# Kubernetes tooling (optional)
kubectl version --client
helm version
```
### System Settings (Linux)

Elasticsearch opens many files and relies on memory-mapped I/O. On Linux, the following settings are required:

```bash
# Set vm.max_map_count (required)
sudo sysctl -w vm.max_map_count=262144

# Make it persistent across reboots
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf

# Raise the open file descriptor limit
ulimit -n 65535
```
## Docker Compose: Development Environment

### Single Node (the Simplest Start)

```yaml
# docker-compose.yml
version: '3.8'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false  # security disabled for development only
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - es_data:/usr/share/elasticsearch/data
    healthcheck:
      test: ["CMD-SHELL", "curl -s http://localhost:9200 | grep -q 'cluster_name'"]
      interval: 10s
      timeout: 10s
      retries: 5

volumes:
  es_data:
    driver: local
```
```bash
# Start it
docker compose up -d

# Check status
curl http://localhost:9200

# Tail the logs
docker compose logs -f elasticsearch
```
Example response:

```json
{
  "name": "elasticsearch",
  "cluster_name": "docker-cluster",
  "cluster_uuid": "abc123...",
  "version": {
    "number": "8.11.0"
  },
  "tagline": "You Know, for Search"
}
```
### 3-Node Cluster (a Production-Like Environment)
```yaml
# docker-compose.cluster.yml
version: '3.8'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es01_data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
    networks:
      - elastic
    healthcheck:
      test: ["CMD-SHELL", "curl -s http://localhost:9200/_cluster/health | grep -q '\"status\":\"green\"\\|\"status\":\"yellow\"'"]
      interval: 30s
      timeout: 10s
      retries: 5
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es02_data:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es03_data:/usr/share/elasticsearch/data
    networks:
      - elastic
  kibana:
    image: docker.elastic.co/kibana/kibana:8.11.0
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://es01:9200
    ports:
      - "5601:5601"
    networks:
      - elastic
    depends_on:
      es01:
        condition: service_healthy

networks:
  elastic:
    driver: bridge

volumes:
  es01_data:
  es02_data:
  es03_data:
```
```bash
# Start the cluster
docker compose -f docker-compose.cluster.yml up -d

# Check cluster health
curl http://localhost:9200/_cluster/health?pretty

# List the nodes
curl http://localhost:9200/_cat/nodes?v
```
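A freshly started cluster takes a little while to elect a master and assign shards, so scripts that hit it immediately may see errors. Rather than polling in a loop, the health API can block until a target status is reached via its `wait_for_status` parameter:

```shell
# Block for up to 60s until the cluster reaches at least yellow status;
# the call returns as soon as the condition is met (or the timeout expires).
curl -s "http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=60s&pretty"
```

This is handy in CI pipelines or init scripts that index data right after `docker compose up`.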
### Enabling Security (Elasticsearch 8.x)

Security is enabled by default starting with Elasticsearch 8.x. To keep it enabled even in a development environment:
```yaml
# docker-compose.secure.yml
version: '3.8'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    container_name: elasticsearch-secure
    environment:
      - discovery.type=single-node
      - ELASTIC_PASSWORD=changeme  # be sure to change this
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=false  # allow plain HTTP for development
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ports:
      - "9200:9200"
    volumes:
      - es_secure_data:/usr/share/elasticsearch/data

volumes:
  es_secure_data:
```
```bash
# Authenticated request
curl -u elastic:changeme http://localhost:9200

# Create a user
curl -u elastic:changeme -X POST "localhost:9200/_security/user/app_user" \
  -H "Content-Type: application/json" -d'
{
  "password": "app_password",
  "roles": ["superuser"]
}'
```
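Granting `superuser` is fine for a quick demo, but a real application account should get a role scoped to the indices it actually uses. A minimal sketch using the role API (the `app_role` name and the `app-*` index pattern here are illustrative, not part of any default):

```shell
# Create a role limited to read/write on app-* indices
curl -u elastic:changeme -X POST "localhost:9200/_security/role/app_role" \
  -H "Content-Type: application/json" -d'
{
  "indices": [
    {
      "names": ["app-*"],
      "privileges": ["read", "write", "create_index"]
    }
  ]
}'

# Then assign that role instead of superuser when creating the user
curl -u elastic:changeme -X POST "localhost:9200/_security/user/app_user" \
  -H "Content-Type: application/json" -d'
{
  "password": "app_password",
  "roles": ["app_role"]
}'
```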
## Kubernetes: Production Environment

### Installing with the Helm Chart

The official Elastic Helm chart makes it straightforward to deploy a production-grade cluster.
```bash
# Add the Elastic Helm repository
helm repo add elastic https://helm.elastic.co
helm repo update

# Create a namespace
kubectl create namespace elastic

# Default installation
helm install elasticsearch elastic/elasticsearch \
  --namespace elastic \
  --set replicas=3
```
### Custom Values File

```yaml
# values-production.yaml
clusterName: "production-search"
nodeGroup: "master"
replicas: 3

roles:
  - master
  - data
  - ingest

resources:
  requests:
    cpu: "1000m"
    memory: "4Gi"
  limits:
    cpu: "2000m"
    memory: "8Gi"

# JVM heap (50% of container memory)
esJavaOpts: "-Xmx4g -Xms4g"

# Volume settings
volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 100Gi
  storageClassName: "gp3"  # AWS EBS gp3

# Pod anti-affinity (spread pods across nodes)
antiAffinity: "hard"

# Security settings
protocol: https
createCert: true
esConfig:
  elasticsearch.yml: |
    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.http.ssl.enabled: true

# Secret (must be created separately)
extraEnvs:
  - name: ELASTIC_PASSWORD
    valueFrom:
      secretKeyRef:
        name: elasticsearch-credentials
        key: password

# Init container that applies sysctl settings (vm.max_map_count)
sysctlInitContainer:
  enabled: true

# Pod priority
priorityClassName: "high-priority"
```
```bash
# Create the secret
kubectl create secret generic elasticsearch-credentials \
  --namespace elastic \
  --from-literal=password='StrongPassword123!'

# Install with the custom values
helm install elasticsearch elastic/elasticsearch \
  --namespace elastic \
  -f values-production.yaml
```
### Checking Cluster Status

```bash
# Check pod status
kubectl get pods -n elastic -l app=elasticsearch-master

# Check services
kubectl get svc -n elastic

# Check cluster health (via port forwarding)
kubectl port-forward svc/elasticsearch-master 9200:9200 -n elastic

# In another terminal
curl -k -u elastic:StrongPassword123! https://localhost:9200/_cluster/health?pretty
```
### Installing Kibana

```yaml
# values-kibana.yaml
elasticsearchHosts: "https://elasticsearch-master:9200"

resources:
  requests:
    cpu: "500m"
    memory: "1Gi"
  limits:
    cpu: "1000m"
    memory: "2Gi"

# Elasticsearch authentication
extraEnvs:
  - name: ELASTICSEARCH_USERNAME
    value: "elastic"
  - name: ELASTICSEARCH_PASSWORD
    valueFrom:
      secretKeyRef:
        name: elasticsearch-credentials
        key: password
  - name: ELASTICSEARCH_SSL_VERIFICATIONMODE
    value: "none"

# Ingress (optional)
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: kibana.example.com
      paths:
        - path: /
  tls:
    - secretName: kibana-tls
      hosts:
        - kibana.example.com
```
```bash
helm install kibana elastic/kibana \
  --namespace elastic \
  -f values-kibana.yaml
```
### Hot-Warm-Cold Architecture

A tiered storage setup for large volumes of log data:
```yaml
# values-hot.yaml
nodeGroup: "hot"
roles:
  - data_hot
  - ingest
replicas: 3
resources:
  requests:
    memory: "8Gi"
volumeClaimTemplate:
  storageClassName: "gp3-iops"  # high-performance SSD
  resources:
    requests:
      storage: 200Gi
esConfig:
  elasticsearch.yml: |
    node.attr.data: hot
```

```yaml
# values-warm.yaml
nodeGroup: "warm"
roles:
  - data_warm
replicas: 2
resources:
  requests:
    memory: "4Gi"
volumeClaimTemplate:
  storageClassName: "gp2"  # general-purpose SSD
  resources:
    requests:
      storage: 500Gi
esConfig:
  elasticsearch.yml: |
    node.attr.data: warm
```

```yaml
# values-cold.yaml
nodeGroup: "cold"
roles:
  - data_cold
replicas: 2
resources:
  requests:
    memory: "2Gi"
volumeClaimTemplate:
  storageClassName: "sc1"  # HDD
  resources:
    requests:
      storage: 2000Gi
esConfig:
  elasticsearch.yml: |
    node.attr.data: cold
```
```bash
# Install each tier
helm install es-master elastic/elasticsearch -n elastic -f values-master.yaml
helm install es-hot elastic/elasticsearch -n elastic -f values-hot.yaml
helm install es-warm elastic/elasticsearch -n elastic -f values-warm.yaml
helm install es-cold elastic/elasticsearch -n elastic -f values-cold.yaml
```
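Node tiers by themselves do not move any data; an ILM (index lifecycle management) policy tells Elasticsearch when indices should migrate between them. A minimal sketch (the policy name `logs-policy` and the age thresholds are illustrative; with `data_hot`/`data_warm`/`data_cold` node roles, the `migrate` action routes indices to the matching tier):

```shell
curl -X PUT "localhost:9200/_ilm/policy/logs-policy" \
  -H "Content-Type: application/json" -d'
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_primary_shard_size": "50gb", "max_age": "1d" }
        }
      },
      "warm": {
        "min_age": "7d",
        "actions": { "migrate": {} }
      },
      "cold": {
        "min_age": "30d",
        "actions": { "migrate": {} }
      },
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}'
```

Attach the policy to an index template (via `index.lifecycle.name`) so that new log indices pick it up automatically.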
## Installing OpenSearch on Docker/Kubernetes

### OpenSearch Docker Compose
```yaml
# docker-compose.opensearch.yml
version: '3.8'
services:
  opensearch-node1:
    image: opensearchproject/opensearch:2.11.0
    container_name: opensearch-node1
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node1
      - discovery.seed_hosts=opensearch-node1,opensearch-node2
      - cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2
      - bootstrap.memory_lock=true
      - "OPENSEARCH_JAVA_OPTS=-Xms1g -Xmx1g"
      - DISABLE_INSTALL_DEMO_CONFIG=true  # skip the demo configuration
      - DISABLE_SECURITY_PLUGIN=true      # security disabled for development only
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - opensearch-data1:/usr/share/opensearch/data
    ports:
      - "9200:9200"
      - "9600:9600"
    networks:
      - opensearch-net
  opensearch-node2:
    image: opensearchproject/opensearch:2.11.0
    container_name: opensearch-node2
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node2
      - discovery.seed_hosts=opensearch-node1,opensearch-node2
      - cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2
      - bootstrap.memory_lock=true
      - "OPENSEARCH_JAVA_OPTS=-Xms1g -Xmx1g"
      - DISABLE_INSTALL_DEMO_CONFIG=true
      - DISABLE_SECURITY_PLUGIN=true
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - opensearch-data2:/usr/share/opensearch/data
    networks:
      - opensearch-net
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:2.11.0
    container_name: opensearch-dashboards
    ports:
      - "5601:5601"
    environment:
      - 'OPENSEARCH_HOSTS=["http://opensearch-node1:9200","http://opensearch-node2:9200"]'
      - DISABLE_SECURITY_DASHBOARDS_PLUGIN=true
    networks:
      - opensearch-net

networks:
  opensearch-net:

volumes:
  opensearch-data1:
  opensearch-data2:
```
### OpenSearch on Kubernetes (Helm)

```bash
# Add the OpenSearch Helm repository
helm repo add opensearch https://opensearch-project.github.io/helm-charts/
helm repo update

# Install
helm install opensearch opensearch/opensearch \
  --namespace opensearch \
  --create-namespace \
  --set replicas=3

# Install Dashboards
helm install dashboards opensearch/opensearch-dashboards \
  --namespace opensearch
```
## Verifying the Installation

### Basic Functionality Check

```bash
# Cluster health
curl -X GET "localhost:9200/_cluster/health?pretty"

# Node information
curl -X GET "localhost:9200/_cat/nodes?v"

# List indices
curl -X GET "localhost:9200/_cat/indices?v"

# Index a test document
curl -X POST "localhost:9200/test/_doc/1" \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, Elasticsearch!"}'

# Search
curl -X GET "localhost:9200/test/_search?pretty" \
  -H "Content-Type: application/json" \
  -d '{"query": {"match_all": {}}}'
```
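One gotcha when scripting this check: a newly indexed document only becomes searchable after a refresh, which happens every second by default. If the search runs immediately after the index call, it may return zero hits. A refresh can be forced for test purposes (do not do this routinely in production, as refreshes are costly):

```shell
# Force a refresh so the test document is immediately visible to search
curl -X POST "localhost:9200/test/_refresh"
```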
### Performance Check

```bash
# Cluster statistics
curl -X GET "localhost:9200/_cluster/stats?pretty"

# Node statistics
curl -X GET "localhost:9200/_nodes/stats?pretty"

# Hot threads
curl -X GET "localhost:9200/_nodes/hot_threads"
```
## Troubleshooting

### Common Issues

**1. vm.max_map_count error**

```
max virtual memory areas vm.max_map_count [65530] is too low
```

```bash
# Fix
sudo sysctl -w vm.max_map_count=262144
```
**2. Out of memory**

```
java.lang.OutOfMemoryError: Java heap space
```

```yaml
# Adjust the heap size
environment:
  - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
```
**3. Disk watermark**

```
flood stage disk watermark [95%] exceeded
```

```bash
# Adjust the watermarks (temporary workaround)
curl -X PUT "localhost:9200/_cluster/settings" \
  -H "Content-Type: application/json" \
  -d '{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}'
```
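Once the flood-stage watermark has been hit, Elasticsearch also puts a read-only block on the affected indices. On recent versions the block is released automatically once disk usage drops back below the high watermark, but after freeing space it can also be cleared manually:

```shell
# Remove the read-only block from all indices after freeing disk space
curl -X PUT "localhost:9200/_all/_settings" \
  -H "Content-Type: application/json" \
  -d '{"index.blocks.read_only_allow_delete": null}'
```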
**4. Cluster connection failures**

```bash
# Inspect the Docker network
docker network inspect elastic

# Check DNS (Kubernetes)
kubectl exec -it es-master-0 -n elastic -- nslookup elasticsearch-master
```
## Wrapping Up

What this article covered:

- Docker single node: a quick start for development
- Docker cluster: a production-like environment
- Kubernetes Helm: production deployment
- Hot-Warm-Cold: an architecture for large-scale logs
- OpenSearch: an alternative installation path

The next article covers mappings and field types: how to structure your data, and which field types to choose for search performance.