GHE server version: 2.19.2

Reference

https://github.community/t5/GitHub-Enterprise-Best-Practices/High-Availability-and-Disaster-Recovery-for-GitHub-Enterprise/ba-p/11725

https://help.github.com/en/enterprise/2.19/admin/installation/configuring-github-enterprise-server-for-high-availability

HA (Normal case - primary & replica both healthy)

Primary Side

Switch to maintenance mode

Run command

  • ghe-maintenance -s

admin@githubtest-testdomain-io:~$ ghe-maintenance -s

 

 

Replica Side

Change the AWS Route 53 DNS record to point at the replica

  • primary → replica

 

Run command (promote the replica to primary)

  • ghe-repl-promote
admin@ip-172-3?-???-???:~$ ghe-repl-promote
Warning: You are about to promote this Replica node
Promoting this Replica will tear down replication and enable maintenance mode on the current Primary.
All other Replicas need to be re-setup to use this new Primary server.
 
Proceed with promoting this appliance to Primary? [y/N] y
Enabling maintenance mode on the primary to prevent writes ...
Stopping replication ...
  | Stopping Pages replication ...
  | Stopping Git replication ...
  | Stopping Alambic replication ...
  | Stopping git-hooks replication ...
  | Stopping MySQL replication ...
  | Stopping Redis replication ...
  | Stopping Consul replication ...
  | Success: Replication was stopped for all services.
  | To disable replica mode and remove all replica configuration, run 'ghe-repl-teardown'.
Switching out of replica mode ...
  | Dec 04 07:41:45 Preparing storage device...
  | Dec 04 07:41:46 Updating configuration...
  | Dec 04 07:41:46 Reloading system services...
  | Dec 04 07:42:18 Running migrations...
  | Dec 04 07:42:54 Reloading application services...
  | Dec 04 07:43:18 Done!
  | jq: error (at :0): Cannot index number with string "settings"
  | Success: Replication configuration has been removed.
  | Run `ghe-repl-setup' to re-enable replica mode.
Applying configuration and starting services ...
  | ERROR: cannot launch /usr/local/bin/ghe-single-config-apply - run is locked
admin@ip-172-3?-???-???:~$

 

Former Primary Side

Reconfigure the former primary as a replica

Run command

  • ghe-repl-setup 172.3?.???.???
  • ghe-repl-start
  • ghe-repl-status
  • ghe-maintenance -u (unset maintenance mode)
admin@githubtest-testdomain.io:~$ ghe-repl-setup 172.3?.???.???
Warning: This appliance is or has been a configured appliance.
Proceeding will overwrite data on this appliance.
 
Proceed with initializing this appliance as a replica? [y/N] y
Verifying ssh connectivity with 172.3?.???.??? ...
Connection check succeeded.
Updating Elasticsearch configuration ...
Copying license and settings from primary appliance ...
 --> Importing SSH host keys...
 --> The SSH host keys on this appliance have been replaced to match the primary.
 --> Please run 'ssh-keygen -R 172.3?.XXX.XXX; ssh-keygen -R "[172.3?.XXX.XXX]:122"' on your client to prevent future ssh warnings.
Copying custom CA certificates from primary appliance ...
Success: Replica mode is configured against 172.3?.???.???.
To disable replica mode and undo these changes, run 'ghe-repl-teardown'.
Run 'ghe-repl-start' to start replicating from the newly configured primary.
 
admin@githubtest-testdomain.io:~$ ghe-repl-start
Verifying ssh connectivity with 172.3?.???.??? ...
Updating configuration...
Validating configuration
Updating configuration for githubtest-testdomain.io-primary (172.3?.???.???)
Configuration Updated
Configuration Phase 1
githubtest-testdomain.io-replica: Dec 04 07:49:43 Preparing storage device...
githubtest-testdomain.io-replica: Dec 04 07:49:45 Updating configuration...
githubtest-testdomain.io-replica: Dec 04 07:49:45 Reloading system services...
githubtest-testdomain.io-replica: Dec 04 07:50:04 Done!
githubtest-testdomain.io-primary: Dec 04 07:49:43 Preparing storage device...
githubtest-testdomain.io-primary: Dec 04 07:49:45 Updating configuration...
githubtest-testdomain.io-primary: Dec 04 07:49:45 Reloading system services...
githubtest-testdomain.io-primary: Dec 04 07:50:10 Done!
Configuration Phase 2
githubtest-testdomain.io-replica: Dec 04 07:50:13 Running migrations...
githubtest-testdomain.io-replica: Dec 04 07:50:13 Done!
githubtest-testdomain.io-primary: Dec 04 07:50:15 Running migrations...
githubtest-testdomain.io-primary: Dec 04 07:50:47 Done!
Configuration Phase 3
githubtest-testdomain.io-primary: Waiting for services to be active...
githubtest-testdomain.io-primary: Dec 04 07:51:06 Reloading application services...
githubtest-testdomain.io-primary: Dec 04 07:51:29 Done!
githubtest-testdomain.io-replica: Dec 04 07:50:48 Reloading application services...
githubtest-testdomain.io-replica: Dec 04 07:51:59 Done!
Finished cluster configuration
Success: replication is running for all services.
Run `ghe-repl-status' to monitor replication health and progress.
 
 
 
admin@githubtest-testdomain.io:~$ ghe-repl-status
OK: mysql replication is in sync
OK: redis replication is in sync
OK: elasticsearch cluster is in sync
OK: git replication is in sync
OK: pages replication is in sync
OK: alambic replication is in sync
OK: git-hooks replication is in sync
OK: consul replication is in sync
 
 
 
admin@githubtest-testdomain.io:~$ ghe-maintenance -u
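When scripting health checks around failover, the `ghe-repl-status` output above is easy to test: every service line starts with `OK:` when replication is in sync. A minimal parsing sketch (a hypothetical helper, not part of GHE's tooling):

```python
def replication_healthy(status_output: str) -> bool:
    """True only if every non-empty line of `ghe-repl-status` output reports OK."""
    lines = [line.strip() for line in status_output.splitlines() if line.strip()]
    return bool(lines) and all(line.startswith("OK:") for line in lines)

sample = """OK: mysql replication is in sync
OK: redis replication is in sync"""
healthy = replication_healthy(sample)
```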

 

HA (Disaster case - Primary EC2 terminated)

If the primary EC2 instance has been terminated, just run this command on the replica side

  • ghe-repl-promote

Then change the Route 53 DNS record to point at the replica

 

admin@githubtest-testdomain.io-replica:~$ ghe-repl-promote
Warning: You are about to promote this Replica node
Promoting this Replica will tear down replication and enable maintenance mode on the current Primary.
All other Replicas need to be re-setup to use this new Primary server.
 
Proceed with promoting this appliance to Primary? [y/N] y
ssh: connect to host 172.3?.???.??? port 122: Connection timed out
Warning: Primary node is unavailable.
Warning: Performing hard failover without cleaning up on the primary side.
Stopping replication ...
  | Skipping Pages, Alambic, git-hooks and Git replication cleanup on primary ...
  | Stopping MySQL replication ...
  | Stopping Redis replication ...
  | Stopping Consul replication ...
  | Success: Replication was stopped for all services.
  | To disable replica mode and remove all replica configuration, run 'ghe-repl-teardown'.
Switching out of replica mode ...
  | ssh: connect to host 172.3?.???.??? port 122: Connection timed out
  | ssh: connect to host 172.3?.???.??? port 122: Connection timed out
  | ssh: connect to host 172.3?.???.??? port 122: Connection timed out
  | ssh: connect to host 172.3?.???.??? port 122: Connection timed out
  | ssh: connect to host 172.3?.???.??? port 122: No route to host
  | jq: error (at :0): Cannot index number with string "settings"
  | jq: error (at :0): Cannot index number with string "settings"
  | Success: Replication configuration has been removed.
  | Run `ghe-repl-setup' to re-enable replica mode.
Applying configuration and starting services ...
Success: Replica has been promoted to primary and is now accepting requests.
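The Route 53 change described above can also be scripted. A sketch of the change batch that Route 53's ChangeResourceRecordSets API expects, assuming boto3 is available; the record name, IP, and zone ID are placeholders, not values from this setup:

```python
def build_change_batch(record_name, new_ip, ttl=60):
    """Build the UPSERT change batch for repointing the A record at the new primary."""
    return {
        "Comment": "Point the GHE hostname at the promoted replica",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "A",
                "TTL": ttl,
                "ResourceRecords": [{"Value": new_ip}],
            },
        }],
    }

# With boto3 (not executed here), this would be submitted as:
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="ZONE_ID", ChangeBatch=build_change_batch(...))
batch = build_change_batch("githubtest.testdomain.io.", "172.31.0.20")
```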

 


Create one more EC2 instance from the GHE image.

The replica server can use the same license file.

Select installation type

  • Configure as Replica

After the setup finishes:

Run command (Replica VM)

  • ghe-repl-setup $PRIMARY_VM_IP_ADDRESS
admin@ip-172-3?-???-???:~$ ghe-repl-setup 172.3?.???.???
Generating public/private ed25519 key pair.
/home/admin/.ssh/id_ed25519 already exists.
Overwrite (y/n)? Your identification has been saved in /home/admin/.ssh/id_ed25519.
Your public key has been saved in /home/admin/.ssh/id_ed25519.pub.
The key fingerprint is:
SHA256:t3C46zQrYB/???????????????????????s1dc/e4Ho admin-ssh-key
The key's randomart image is:
+--[ED25519 256]--+
| o++o. . .       |
|..o.* + . o      |
|.. = O     +     |
|  . * o  .o o    |
|   o * oS.oo .   |
|    B O o=..     |
|   o * *+..E     |
|      =. +.      |
|       o+        |
+----[SHA256]-----+
Connection check failed.
The primary GitHub Enterprise Server appliance must be configured to allow replica access.
Visit http://172.3?.???.???/setup/settings and authorize the following SSH key:

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJev???????????????????????YEf1Kvx7AyBAduoMe admin-ssh-key

Run `ghe-repl-setup 172.3?.???.???' once the key has been added to continue replica setup

 

Use the key printed by the command

  • ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJev?????????????????wYEf1Kvx7AyBAduoMe admin-ssh-key

Visit the primary and add the SSH key

  • Visit http://172.3?.???.???/setup/settings and authorize the following SSH key
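If replica provisioning is automated, the key can be scraped from the `ghe-repl-setup` output shown above rather than copied by hand. A small sketch (a hypothetical helper; assumes the `ssh-ed25519 <key> <comment>` line format printed above):

```python
import re

def extract_replica_key(setup_output: str):
    """Pull the 'ssh-ed25519 <key> <comment>' line out of ghe-repl-setup output."""
    match = re.search(r"^(ssh-ed25519 \S+ \S+)\s*$", setup_output, re.MULTILINE)
    return match.group(1) if match else None
```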

 

Run command (Replica VM)

  • ghe-repl-setup $PRIMARY_VM_IP_ADDRESS
  • ghe-repl-start
  • ghe-repl-status
admin@ip-172-3?-???-???:$ ghe-repl-setup 172.3?.???.???
Verifying ssh connectivity with 172.3?.???.??? ...
Connection check succeeded.
Updating Elasticsearch configuration ...
Elasticsearch isn't listening on tcp/9200.
Copying license and settings from primary appliance ...
 --> Importing SSH host keys...
 --> The SSH host keys on this appliance have been replaced to match the primary.
 --> Please run 'ssh-keygen -R 172.3X.XX.XXX; ssh-keygen -R "[172.3X.XX.XXX]:122"' on your client to prevent future ssh warnings.
Copying custom CA certificates from primary appliance ...
Success: Replica mode is configured against 172.3?.???.???.
To disable replica mode and undo these changes, run 'ghe-repl-teardown'.
Run 'ghe-repl-start' to start replicating from the newly configured primary.
 
 
 
admin@ip-172-3?-???-???:$ ghe-repl-start
Verifying ssh connectivity with 172.3?.???.??? ...
Updating configuration...
Validating configuration
Updating configuration for githubtest-testserver-io-primary (172.3?.???.???)
Configuration Updated
Configuration Phase 1
githubtest-testserver-io-primary: Dec 04 05:46:52 Preparing storage device...
githubtest-testserver-io-primary: Dec 04 05:46:54 Updating configuration...
githubtest-testserver-io-primary: Dec 04 05:46:55 Reloading system services...
githubtest-testserver-io-primary: Dec 04 05:47:19 Done!
githubtest-testserver-io-replica: Dec 04 05:46:52 Preparing storage device...
githubtest-testserver-io-replica: Dec 04 05:46:54 Updating configuration...
githubtest-testserver-io-replica: Dec 04 05:46:55 Reloading system services...
githubtest-testserver-io-replica: Dec 04 05:48:11 Done!
Configuration Phase 2
githubtest-testserver-io-replica: Dec 04 05:48:12 Running migrations...
githubtest-testserver-io-replica: Dec 04 05:48:12 Done!
githubtest-testserver-io-primary: Dec 04 05:48:12 Running migrations...
githubtest-testserver-io-primary: Dec 04 05:48:28 Done!
Configuration Phase 3
githubtest-testserver-io-primary: Waiting for services to be active...
githubtest-testserver-io-primary: Dec 04 05:48:47 Reloading application services...
githubtest-testserver-io-primary: Dec 04 05:49:10 Done!
githubtest-testserver-io-replica: Dec 04 05:48:30 Reloading application services...
githubtest-testserver-io-replica: Dec 04 05:50:06 Done!
Finished cluster configuration
Success: replication is running for all services.
Run `ghe-repl-status' to monitor replication health and progress.
 
 
 
admin@ip-172-3?-???-???:$ ghe-repl-status
OK: mysql replication is in sync
OK: redis replication is in sync
OK: elasticsearch cluster is in sync
OK: git replication is in sync
OK: pages replication is in sync
OK: alambic replication is in sync
OK: git-hooks replication is in sync
OK: consul replication is in sync

GHE server version: 2.19.2

License

https://enterprise.github.com/dashboard (Free Trial or Purchase license)

After signing in, click 'Download' at the top of the menu.

 

Step 1. Download license

 

Step 2. Appliance

Select Cloud vendor & region

I selected AWS & Mumbai region.

 

Open the AWS EC2 dashboard and click 'Launch Instance'

Search for the AMI by ID: ami-06e551001e2d40c1a

Configure the instance and launch it

  • For storage, add an EBS volume of at least 10 GB

Install

Read the manual first.

https://help.github.com/en/enterprise/2.19/admin/installation/installing-github-enterprise-server-on-aws

 


 

TCP ports 122 (SSH) and 8443 (HTTPS) need to be open before starting setup
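If the security group is managed programmatically, the two ingress rules can be built as below. This is a sketch assuming boto3; the `0.0.0.0/0` CIDR is a placeholder and should be narrowed to your admin network:

```python
def ghe_ingress_rules(cidr: str = "0.0.0.0/0"):
    """IpPermissions entries for the ports GHE setup needs: 122 (SSH), 8443 (HTTPS)."""
    return [
        {
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": cidr}],
        }
        for port in (122, 8443)
    ]

# e.g. boto3.client("ec2").authorize_security_group_ingress(
#          GroupId="sg-...", IpPermissions=ghe_ingress_rules("203.0.113.0/24"))
```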

 

Upload your license file & set password.

 

Select installation type

  • New Install

 

Add the DNS record in Route 53

 

Set the hostname & test the domain settings

 

SSL setup & Click Save settings

 

Finish setup & Click 'Save settings' 

 

If the hostname does not work after setup finishes, use this link

 

Elasticsearch version check

  • 6.8

 

Download the Logstash package

Link

Install Command

  • wget https://artifacts.elastic.co/downloads/logstash/logstash-6.8.0.rpm
  • sudo yum install logstash-6.8.0.rpm

Install the amazon_es plugin for Logstash

  • /usr/share/logstash/bin/logstash-plugin list logstash-output

If 'amazon_es' is not listed, install the amazon_es output plugin

  • sudo /usr/share/logstash/bin/logstash-plugin install logstash-output-amazon_es
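Once the plugin is installed, the pipeline needs an amazon_es output block. A minimal sketch; the log path, ES endpoint, region, and index pattern below are placeholders, not values from this setup:

```conf
input {
  file { path => "/var/log/ghe/forwarded.log" }   # placeholder log path
}
output {
  amazon_es {
    hosts  => ["my-es-domain.ap-south-1.es.amazonaws.com"]  # placeholder endpoint
    region => "ap-south-1"
    index  => "ghe-logs-%{+YYYY.MM.dd}"
  }
}
```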

Access control is required for audits,

but managing it per account on each CentOS server is far too inconvenient,

so we introduced Kerberos.

The rollout history is summarized below.

 

Table of contents

  • kerberos server
  • kerberos client
  • macos

Kerberos Server

Install

1. Working environment

$ cat /etc/system-release
Amazon Linux release 2 (Karoo)

2. Installed packages

  • krb5-related
  • ntp
$ sudo yum list installed | grep krb
krb5-devel.x86_64                     1.15.1-19.amzn2.0.3            @amzn2-core
krb5-libs.x86_64                      1.15.1-19.amzn2.0.3            @amzn2-core
krb5-server.x86_64                    1.15.1-19.amzn2.0.3            @amzn2-core
krb5-workstation.x86_64               1.15.1-19.amzn2.0.3            @amzn2-core
pam_krb5.x86_64                       2.4.8-6.amzn2.0.2              @amzn2-core

$ sudo yum list installed | grep ntp
fontpackages-filesystem.noarch        1.44-8.amzn2                   @amzn2-core

Reference links

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/managing_smart_cards/installing-kerberos

https://gist.github.com/ashrithr/4767927948eca70845db

 

Configuration

1. Create two EC2 instances

- Configured for HA as master and slave.

2. DNS configuration

Added two domains in Route 53. (Replace abcdef.com with your actual domain name.)

- kdc.abcdef.com

- kdc2.abcdef.com

 

 

Configuration files

 

1. /etc/krb5.conf

- The realm domain must be in UPPERCASE.

$ cat /etc/krb5.conf
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
 kdc = FILE:/var/log/kerberos/krb5kdc.log
 admin_server = FILE:/var/log/kerberos/kadmin.log
 default = FILE:/var/log/kerberos/krb5lib.log

[libdefaults]
 default_realm = ABCDEF.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 rdns = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 ABCDEF.COM = {
  kdc = kdc.abcdef.com:88
  kdc = kdc2.abcdef.com:88
  admin_server = kdc.abcdef.com:749
  default_tkt_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96 aes256-cts-hmac-sha384-192 aes128-cts-hmac-sha256-128 des3-cbc-sha1 arcfour-hmac-md5 camellia256-cts-cmac camellia128-cts-cmac des-cbc-crc des-cbc-md5 des-cbc-md4
  default_tgs_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96 aes256-cts-hmac-sha384-192 aes128-cts-hmac-sha256-128 des3-cbc-sha1 arcfour-hmac-md5 camellia256-cts-cmac camellia128-cts-cmac des-cbc-crc des-cbc-md5 des-cbc-md4
  permitted_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96 aes256-cts-hmac-sha384-192 aes128-cts-hmac-sha256-128 des3-cbc-sha1 arcfour-hmac-md5 camellia256-cts-cmac camellia128-cts-cmac des-cbc-crc des-cbc-md5 des-cbc-md4
 }

[domain_realm]
# .example.com = EXAMPLE.COM
# example.com = EXAMPLE.COM   

 

 

2. /var/kerberos/krb5kdc/kdc.conf

[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 ABCDEF.COM = {
  kadmind_port = 749
  max_life = 9h 0m 0s
  max_renewable_life = 7d 0h 0m 0s
  master_key_type = des3-hmac-sha1
  supported_enctypes = aes256-cts-hmac-sha1-96:normal aes128-cts-hmac-sha1-96:normal des3-cbc-sha1:normal arcfour-hmac-md5:normal
  database_name = /var/kerberos/krb5kdc/principal
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /var/kerberos/krb5kdc/kadm5.dict
  key_stash_file = /var/kerberos/krb5kdc/.k5.ABCDEF.COM
 }

 

 

3. /var/kerberos/krb5kdc/kadm5.acl

*/admin@ABCDEF.COM  *

 

Remaining setup

 

1. Create the KDC database

kdb5_util create -r ABCDEF.COM -s

 

 

2. Create the KDC admin principal

# kadmin.local
kadmin.local:  addprinc account/admin@ABCDEF.COM

NOTICE: no policy specified for "account/admin@ABCDEF.COM";
assigning "default".

Enter password for principal "account/admin@ABCDEF.COM":  (Enter a password.)
Re-enter password for principal "account/admin@ABCDEF.COM": (Type it again.)

Principal "account/admin@ABCDEF.COM" created.
 
kadmin.local:

 

 

3. Back up the KDC database (the reference link also covers recovery)

#!/bin/bash

/usr/sbin/kdb5_util dump /var/kerberos/slave_datatrans
/usr/sbin/kprop -f /var/kerberos/slave_datatrans mgmt-krb-kdc02.abcdef.com > /dev/null
  • Note: the propagation target's domain is defined in /etc/hosts.
% cat /etc/hosts

... 
10.100.125.156     mgmt-krb-kdc02.abcdef.com mgmt-krb-kdc02
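To keep the slave KDC current, the dump-and-kprop script above has to run on a schedule. A sketch of a cron entry, assuming the script were saved as /var/kerberos/kprop_backup.sh and run hourly (both the path and the cadence are assumptions):

```
# /etc/cron.d/kprop - propagate the KDC database to the slave every hour (assumed schedule)
0 * * * * root /var/kerberos/kprop_backup.sh
```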

 

4. Daemon setup

systemctl start krb5kdc.service
systemctl start kadmin.service
systemctl enable krb5kdc.service
systemctl enable kadmin.service

 

Kerberos Client

Install

yum --disablerepo=*  --enablerepo=base,update install -y dmidecode krb5-libs

Configuration

For reference, an nginx on a single EC2 instance serves the setup scripts, and each server that will act as a client fetches and runs them.
Roughly, a command like this is used:
curl -s krb5-client.abcdef.com/seeds/krb-svr-config | /bin/bash

A simple API server implemented with Flask on that same EC2 instance handles the miscellaneous work (account creation, modification, deletion, ...).

File configuration

1. /etc/hosts

  • Configure as needed.

2. /etc/ssh/sshd_config

  • Configure as needed.

3. Run an NTP update

ntpdate -u pool.ntp.org

4. Register the new server's principal in kadmin and create a keytab

# addprinc
/usr/bin/kadmin -p account/admin -w RkaWkrdldi -q "addprinc -randkey host/dev1-api-all.abcdef.com"

# ktadd
/usr/bin/kadmin -p account/admin -w RkaWkrdldi -q "ktadd -k /home/ec2-user/seeds/keytabs/dev1-api-all.abcdef.com host/dev1-api-all.abcdef.com"

# chmod
chmod og+r /home/ec2-user/seeds/keytabs/dev1-api-all.abcdef.com
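Since these commands repeat for every new host, the kadmin queries can be generated. A sketch (a hypothetical helper; the keytab directory matches the path used above, and each query would be passed to kadmin with `-p account/admin` as above):

```python
def kadmin_commands(host_fqdn: str, keytab_dir: str = "/home/ec2-user/seeds/keytabs"):
    """Build the addprinc/ktadd queries for one new client host."""
    principal = f"host/{host_fqdn}"
    keytab = f"{keytab_dir}/{host_fqdn}"
    return [
        f"addprinc -randkey {principal}",
        f"ktadd -k {keytab} {principal}",
    ]

# Each entry would then be run as:
#   /usr/bin/kadmin -p account/admin -q "<query>"
```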

5. Register the host in the KDC's hosts file

cat /home/ec2-user/seeds/hosts

10.100.56.52      dev1-api-lucky21.abcdef.com       dev1-api-lucky201              
10.100.56.51      dev1-api-lucky11.abcdef.com       dev1-api-lucky101              
10.100.56.50      dev1-api-lucky01.abcdef.com       dev1-api-lucky001              
10.100.56.21      dev1-api-point11.abcdef.com     dev1-api-point101                
10.100.56.22      dev1-api-point12.abcdef.com     dev1-api-point12                 
10.100.56.20      dev1-api-point01.abcdef.com     dev1-api-point001                
10.100.56.23      dev1-api-point21.abcdef.com     dev1-api-point201                
10.100.56.24      dev1-api-point22.abcdef.com     dev1-api-point22                 
10.100.0.162      dev1-proxy-out21.abcdef.com        dev1-proxy-out201             
10.100.0.161      dev1-proxy-out11.abcdef.com        dev1-proxy-out101    
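The seeds/hosts file above uses plain /etc/hosts format (IP, FQDN, short name). A small parser sketch for validating entries before distribution (a hypothetical helper, not part of the original setup):

```python
def parse_hosts(text: str) -> dict:
    """Map FQDN -> IP from /etc/hosts-style lines, skipping blanks and comments."""
    table = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and not parts[0].startswith("#"):
            table[parts[1]] = parts[0]
    return table
```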

6. Now copy the keytab file created in step 4 onto the actual Kerberos client:

/etc/krb5.keytab

After addprinc and ktadd, the generated keytab file has to be moved to the Kerberos client,

and the hosts file has to be updated to match.

To make this more convenient, an API server built with nginx + gunicorn + flask on EC2 handles it.

 

MacOS user setup

Since I will be connecting over SSH from my own PC, my machine needs this configuration too.

 

1. /etc/krb5.conf

[libdefaults]
default_realm = ABCDEF.COM
allow_weak_crypto = false
rdns = false

[realms]
ABCDEF.COM = {
kdc = kdc.abcdef.com
kdc = kdc2.abcdef.com
admin_server = kdc.abcdef.com
kpasswd_server = kdc.abcdef.com
}

 

2. /etc/ssh/ssh_config

  • Note: a MacOS update may revert this setting, so check it again after updating.

GSSAPIAuthentication yes     # allow the Kerberos (GSSAPI) authentication protocol for ssh
StrictHostKeyChecking no

3. In a MacOS terminal, run kinit.

kinit --kdc-hostname=kdc.abcdef.com,kdc2.abcdef.com sfixer@ABCDEF.COM

Then check that it actually works.

% ssh sfixer@10.100.125.143
% ssh sfixer@dev1-api-all
...


Getting started feels daunting.

I'll work from the samples and add the features I want.

 

Syntax

 

 

Sample codes


 

 


Having decided to develop with Flutter, I set up an editor.

Vim was hopeless for this, so I went with VS Code.

 

 

https://code.visualstudio.com/

 

Visual Studio Code - Code Editing. Redefined

Visual Studio Code is a code editor redefined and optimized for building and debugging modern web and cloud applications.  Visual Studio Code is free and available on your favorite platform - Linux, macOS, and Windows.

code.visualstudio.com

Download VS Code and install the Flutter plugin.

 

Run the `flutter doctor` command in a terminal, or use the feature below.

Install the SDK.

 

Select your OS.

 

https://flutter.dev/docs/get-started/install/macos

 

MacOS install

 

flutter.dev

 

Follow the steps one by one to install.

 

 

 

Read the messages flutter doctor prints and install exactly what it asks for.

 

Setup complete.

 

After that, in the terminal:

create a project with `flutter create my_project`,

then inside the generated directory

run `flutter run`.

Since there are two targets (Android and iOS),

you may need to pick a device, e.g. `flutter run -d emulator-599055`.

 

Basic operation confirmed.


Jira and Confluence were running self-hosted on an AWS EC2 m3.large

(with nginx and mysql installed and operated on the same instance).

Recently, out-of-memory issues keep occurring:

  • java.lang.OutOfMemoryError: Java heap space

As a first step, I spread out the scheduled job times.

confluence schedule job

It was fine for a few days, but then Confluence became unreachable again because of insufficient memory.

I raised the EC2 spec from m3.large to m3.xlarge, increasing memory from 7.5 GB to 15 GB.
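More instance memory only helps the JVMs if their heaps are allowed to grow; Confluence reads its heap settings from `<install>/bin/setenv.sh`. A sketch with illustrative sizes (the values below are assumptions, not from this post):

```shell
# <confluence-install>/bin/setenv.sh (illustrative heap sizes)
CATALINA_OPTS="-Xms2048m -Xmx4096m ${CATALINA_OPTS}"
```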

 

Processes using memory

  • java/confluence
  • java/jira
  • nginx
  • mysql

 

I brought it up by running the startup.sh command.

It doesn't work: a 500 error occurs.

29-Jul-2019 01:51:06.145 SEVERE [http-nio-8090-exec-2] org.apache.catalina.core.StandardHostValve.custom Exception Processing ErrorPage[errorCode=500, location=/500page.jsp]
 com.atlassian.util.concurrent.LazyReference$InitializationException: java.lang.NullPointerException
        at com.atlassian.util.concurrent.LazyReference.getInterruptibly(LazyReference.java:149)
        at com.atlassian.util.concurrent.LazyReference.get(LazyReference.java:112)
        at com.atlassian.confluence.plugin.servlet.filter.ServletFilterModuleContainerFilter.getServletModuleManager(ServletFilterModuleContainerFilter.java:23)
        at com.atlassian.plugin.servlet.filter.ServletFilterModuleContainerFilter.doFilter(ServletFilterModuleContainerFilter.java:68)
        at com.atlassian.plugin.servlet.filter.ServletFilterModuleContainerFilter.doFilter(ServletFilterModuleContainerFilter.java:63)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at com.atlassian.confluence.web.filter.DebugFilter.doFilter(DebugFilter.java:50)
        at com.atlassian.core.filters.AbstractHttpFilter.doFilter(AbstractHttpFilter.java:31)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:721)
        at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:468)
        at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:391)
        at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:318)
        at org.apache.catalina.core.StandardHostValve.custom(StandardHostValve.java:439)
        at org.apache.catalina.core.StandardHostValve.status(StandardHostValve.java:305)
        at org.apache.catalina.core.StandardHostValve.throwable(StandardHostValve.java:399)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:180)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:518)
        at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1091)
        at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:668)
        at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1521)
        at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1478)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
        at com.atlassian.spring.container.ContainerManager.getComponent(ContainerManager.java:33)
        at com.atlassian.confluence.util.LazyComponentReference$Accessor.get(LazyComponentReference.java:46)
        at com.atlassian.util.concurrent.Lazy$Strong.create(Lazy.java:85)
        at com.atlassian.util.concurrent.LazyReference$Sync.run(LazyReference.java:321)
        at com.atlassian.util.concurrent.LazyReference.getInterruptibly(LazyReference.java:143)
        ... 29 more

 

Ah...... OK, time to Google the error message.

 

Fortunately, the first result looks similar to my situation.

Opening the link, there is hope.

https://community.atlassian.com/t5/Confluence-questions/500-Error-on-Confluence-Startup/qaq-p/393445

 


 

I follow the comments in the link. ('That did it for me, thanks!', someone says. This seems promising.)

 

How to clear Confluence plugins cache - Atlassian Documentation (confluence.atlassian.com)

This feels like the culprit: a plugin cache problem.

 

Solution for this error

The problem that seemed about to be solved stalls again: I can't find <confluence-home>.

Eventually, digging with find, the same directories do exist.

 

Rather than deleting the mentioned directories, I moved them to another path with mv

and restarted Confluence.

On startup it recreates those directories,

so it takes a while to come up. (For reference, it runs on the default port 8090.)

After it came up I checked: where there was a 500 error before, it now returns 200 OK.

$ wget http://localhost:8090
--2019-07-29 04:19:04--  http://localhost:8090/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8090... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://localhost:8090/login.action?os_destination=%2Findex.action&permissionViolation=true [following]
--2019-07-29 04:19:04--  http://localhost:8090/login.action?os_destination=%2Findex.action&permissionViolation=true
Reusing existing connection to localhost:8090.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html’

index.html                                     [ <=>                                                                                    ]  19.64K  --.-KB/s   in 0.002s 

2019-07-29 04:19:04 (11.1 MB/s) - ‘index.html’ saved [20107]

 

It works.
