Xenserver NVMe mount

xenserver | 2017.09.11 16:00

Source: http://www.shinli.com/?p=689

Adding an SSD/NVME Storage Device to Citrix XenServer

In the process of creating Virtual Machines (VMs) for labs for a customer, I needed to have 10 VMs running at the same time. My XenServer host uses a 1.5TB local SATA drive for VM storage and an iSCSI storage server. I can't use the storage server for these VMs as I will be taking my lab XenServer to the customer site and I don't want to take two very heavy full tower servers. After getting the eighth VM running, my XenServer host was begging for mercy: the local SATA bus was saturated with disk traffic. Since I need 10 VMs running, I needed a solution fast. I ordered a Solid State Drive (SSD) to put in the XenServer host. Since I am not a Linux geek, I decided to document what I had to do to make the SSD available for exclusive use by XenServer 5.6 SP2.

From XenCenter, click on the Console tab and press Enter (Figure 1).

Figure 1

Note:  Thanks to my fellow CTP, Denis Gundarev, for his help with the following Linux commands.

From the console prompt, type fdisk -l (that is a lowercase letter "L"). This will list all the drives and partitions that XenServer sees (Figure 2).

Figure 2

My new SSD drive is shown as Disk /dev/sdb. The original drive where XenServer is installed shows as Disk /dev/sda. The external USB drive is 2TB and shows as Disk /dev/sdc. The following commands will use my drive's /dev/sdb designation. The commands to type are shown at the prompts, and comments about those commands are in square brackets [] following the commands. You should not type the comments.

[root@XenServer1 ~]# fdisk /dev/sdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,

until you decide to write them. After that, of course, the previous

content won’t be recoverable.

The number of cylinders for this disk is set to 36481.

There is nothing wrong with that, but this is larger than 1024,

and could in certain setups cause problems with:

1) software that runs at boot time (e.g., old versions of LILO)

2) booting and partitioning software from other OSs

(e.g., DOS FDISK, OS/2 FDISK)

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n [new partition]
Command action
   e   extended
   p   primary partition (1-4)
p [make the partition a primary partition]
Partition number (1-4): 1 [partition number 1]
First cylinder (1-36481, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-36481, default 36481):
Using default value 36481

Command (m for help): t [change file system type]
Selected partition 1
Hex code (type L to list codes): 83 [83 is the Linux file system]

Command (m for help): w [write partition table]
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
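Not part of the original walkthrough: for repeat builds, the same keystrokes answered above (n, p, 1, two defaults, t, 83, w) can be piped into fdisk non-interactively. A sketch, assuming the same /dev/sdb device:

```shell
# Hypothetical automation of the interactive fdisk session above.
# KEYS holds exactly the answers typed at the prompts.
KEYS='n\np\n1\n\n\nt\n83\nw\n'
printf "$KEYS"                    # show the keystrokes
# printf "$KEYS" | fdisk /dev/sdb # uncomment to apply to the real disk
```

Double-check the device name before uncommenting the last line; writing a partition table to the wrong disk is destructive.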

[root@XenServer1 ~]# mkfs.ext3 /dev/sdb1 [format partition]  (to format with ext4 instead, use "mkfs -t ext4 /dev/sdb1")

mke2fs 1.39 (29-May-2006)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

36634624 inodes, 73258400 blocks

3662920 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=0

2236 block groups

32768 blocks per group, 32768 fragments per group

16384 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872, 71663616

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 25 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@XenServer1 ~]# xe sr-create type=ext shared=false device-config:device=/dev/sdb1 name-label=SSD [create Storage Repository]

Once the Storage Repository (SR) is created, it is available in XenCenter in the Storage tab (Figure 3). Creating the new SR on the SSD drive took about 30 seconds.

Figure 3

Select the SSD SR in the Server View and click Add… (Figure 4).

Figure 4

Enter a Name, Description, Size, select SSD and click Add (Figure 5).

Figure 5

The new Virtual Disk appears in XenCenter with no VM assigned (Figure 6).

Figure 6

To verify the new storage repository and its new virtual disk are available to a VM, select any VM, click the Storage tab and click Add… (Figure 7).

Figure 7

The SSD SR shows as available to add a new virtual disk to the selected VM (Figure 8).

Figure 8

Click Cancel.  The new SSD based storage repository and its new virtual disk are ready for use.

After adding the SSD drive, I was able to create the other two VMs needed for the labs (Figure 9). Now to test how much VM startup time improved. I started the Domain Controller and waited for the login screen (87 seconds), started the SQL server and waited for the login screen (37 seconds), then selected all eight remaining VMs and clicked Start on the menu bar. 53 seconds later the last VM was at the login screen.

 

Figure 9

 

Reference: http://carlwebster.com/adding-an-ssd-storage-device-to-citrix-xenserver/


Posted by 덕쑤

vmware python api

Virtualization | 2017.08.18 17:19

https://www.jacobtomlinson.co.uk/vmware/2016/06/22/using-vmware-esxi-vsphere-python-api/


Performance

[Total connections]

Apache (httpd):
    ServerLimit 2048
    MaxClients 2048
Sizing: (Tomcat maxThreads) * (Tomcat instances) * 0.9 = connections

Nginx:
    worker_processes  4;
    events {
        worker_connections  1024;
    }
Sizing: worker_processes * worker_connections = connections

[Timeouts]

Apache (httpd):
    Timeout 10
    KeepAlive Off

Nginx:
    client_body_timeout 5s;
    client_header_timeout 5s;
    keepalive_timeout 5s;
    send_timeout 5s;
    resolver_timeout 5s;

[GZIP compression]

Apache (httpd):
    LoadModule deflate_module modules/mod_deflate.so
    DeflateCompressionLevel 6
    <IfModule mod_deflate.c>
    ...
    </IfModule>
    <IfModule mod_headers.c>
    ...
    </IfModule>

Nginx:
    gzip             on;
    gzip_comp_level  5;
    gzip_min_length  1000;
    gzip_proxied     expired no-cache no-store private auth;
    gzip_types       text/plain application/x-javascript text/xml text/css application/xml;

Note: the compression level ranges from 1 to 9; higher values compress more.

[Proxy buffers]

Apache (httpd):
    ProxyReceiveBufferSize 2048

Nginx:
    client_body_buffer_size 10k;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 2 1k;

[Proxy keepalive]

Apache (httpd):
    <Proxy "http://backend">
        ProxySet keepalive=On
    </Proxy>

Nginx:
    upstream http_backend {
        keepalive 16;
    }

[Cache expiry]

Apache (httpd):
    ExpiresActive On
    <Files "*.jpg">
        ExpiresDefault "access plus 3 months"
    </Files>

Nginx:
    location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
        expires 365d;
    }

Note: longer expiry periods on static files improve performance.
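The two sizing formulas above can be sanity-checked with shell arithmetic; the Tomcat numbers below (maxThreads 200, 2 instances) are assumed for illustration and are not from the source:

```shell
# Nginx capacity: worker_processes * worker_connections
workers=4
connections=1024
echo $(( workers * connections ))            # 4096 total connections

# Tomcat-side sizing: maxThreads * instances * 0.9 (integer form)
max_threads=200   # assumed value
instances=2       # assumed value
echo $(( max_threads * instances * 9 / 10 )) # 360 usable connections
```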


Security

[Directory access restriction]

Apache (httpd):
    <Directory "{directory}">
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>

Nginx:
    location /server-status {
        allow 127.0.0.1;
        deny all;
    }

[Directory listing restriction]

Apache (httpd):
    <Directory "{directory}">
        Options -Indexes
    </Directory>

Nginx:
    autoindex off;

[Error page redirect]

Apache (httpd):
    ErrorDocument 400 /error.html
    ErrorDocument 404 /error.html
    ErrorDocument 500 /error.html
    ErrorDocument 501 /error.html
    ErrorDocument 503 /error.html

Nginx:
    error_page 400 /error_page/400.html;
    error_page 401 /error_page/401.html;
    error_page 403 /error_page/403.html;
    error_page 404 /error_page/404.html;
    error_page 500 /error_page/500.html;

[Server tokens]

Apache (httpd):
    ServerTokens Prod
    ServerSignature Off

Nginx:
    server_tokens off;

Note: prevents the server version and configuration details from being exposed.

[Account]

Both: run worker processes as a low-privilege account (nobody).

[XST attack prevention]

Apache (httpd):
    TraceEnable Off

Nginx:
    add_header X-XSS-Protection "1; mode=block";

Note: nginx rejects TRACE requests with 405 by default; the header above mitigates reflected XSS rather than XST.
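Combining the nginx-side rows above into one hardening sketch (the paths, error pages, and allowed address are illustrative assumptions, not from the source):

```nginx
# Sketch of the nginx hardening settings above in one place.
server_tokens off;            # hide version details
autoindex     off;            # no directory listings

server {
    listen 80;
    error_page 400 401 403 404 /error_page/error.html;
    error_page 500 /error_page/500.html;

    location /server-status {
        allow 127.0.0.1;      # status page reachable from localhost only
        deny  all;
    }
}
```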

 


WAS integration

[Proxy]

Apache (httpd):
    ProxyPass  /admin/   http://localhost:9000/
    ProxyPass  /assets/  http://localhost:9010/

    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
    LoadModule proxy_balancer_module modules/mod_proxy_balancer.so

    <Proxy balancer://{component}>
        BalancerMember ajp://localhost:{portno1}
        BalancerMember ajp://localhost:{portno2}
        ProxySet lbmethod=bytraffic
    </Proxy>

    <VirtualHost *>
        ServerName {domain.name}
        ProxyPass / balancer://{component}/
        ProxyPassReverse / balancer://{component}/
        ...
    </VirtualHost>

Nginx:
    http {
        # upstream defines a group of servers; the upstream name can be
        # referenced by proxy_pass and similar directives.
        upstream service-backend {
            server localhost:9001;
            # number of idle keepalive connections to hold open to the upstream
            keepalive 100;
        }

        ...

        # recover the client IP when requests traverse multiple hops
        set_real_ip_from  127.0.0.1/32;
        real_ip_header    X-Forwarded-For;

        server {
            listen 80;
            location / {
                ...
                include proxy.conf;
                proxy_pass http://service-backend;  # upstream name
            }
        }
    }

Note: the configuration differs somewhat depending on whether the server acts as a load balancer.
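A caveat worth adding here (not in the source): upstream keepalive in nginx only takes effect when the proxied connection uses HTTP/1.1 with the Connection header cleared. A minimal sketch, assuming an upstream block named backend:

```nginx
# Assumed upstream name "backend" -- match it to your own upstream block.
upstream backend {
    server localhost:9001;
    keepalive 100;                       # idle connections kept to the upstream
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # drop the client's Connection header
    }
}
```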

[AJP]

Apache (httpd, mod_jk):
    <VirtualHost *>
        ServerName {domain.name}
        ServerAlias {another.domain.name}
        Include conf/workers.conf
    </VirtualHost>

    JkMount /* tomcat
    ...

    SetEnvIf Request_URI "/css/*" no-jk

    JkMount /* tomcat
    JkMountCopy All

    <VirtualHost *>
        ServerName service.naver.com
        ...
    </VirtualHost>

    Plus: configure workers.properties for the intended use.

Nginx (AJP support comes from the third-party nginx_ajp_module):
    # AJP read timeout
    ajp_read_timeout 600;

    upstream service-backend {
        keepalive 100;
    }

    server {
        location / {
            ajp_keep_conn on;
            ajp_pass service-backend;
        }
    }

Note: the configuration differs somewhat depending on whether the server acts as a load balancer.


Other configuration

[Logs]

Apache (httpd):
    ErrorLog "| /home1/irteam/apps/apache/bin/rotatelogs -l /home1/irteam/logs/apache/error_service1.log.%Y%m%d 86400"
    CustomLog "| /home1/irteam/apps/apache/bin/rotatelogs -l /home1/irteam/logs/apache/access_service1.log.%Y%m%d 86400" combined env=!nolog-request

Nginx:
    error_log  /usr/local/var/log/nginx/error.log;
    error_log  /usr/local/var/log/nginx/error.log  notice;
    error_log  /usr/local/var/log/nginx/error.log  info;

[Redirection]

Apache (httpd):
    RewriteEngine On
    RewriteRule    ^/$    /    [PT,L]

    RewriteCond %{REQUEST_URI} music
    RewriteRule ^/(.*) http://music.naver.com [R]

Nginx:
    server {
        listen 80;
        server_name service.naver.com;
        location /monitor/l7check {
            # if an L7 health check runs against port 80, do not return 301 here
        }
        location / {
            return 301 https://$server_name$request_uri;
        }
    }

    server {
        listen 443 ssl;
        ...
        ssl on;
        ssl_certificate  /home1/irteam/apps/nginx/conf/service_naver_com.crt.combined;
        ssl_certificate_key /home1/irteam/apps/nginx/conf/service_naver_com.key;
        ...
    }

Note: the nginx redirection example doubles as the HTTPS (SSL/TLS) example.

[SSL/TLS]

Apache (httpd):
    <IfModule mod_ssl.c>
        ...
        SSLProtocol all -SSLv2 -SSLv3
        SSLHonorCipherOrder on
        SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    </IfModule>

Nginx:
    server {
        ...
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256;
    }

Nginx
- worker_processes * worker_connections = connections
- worker_processes is usually set to the number of CPU cores
- measure performance to find the optimal values

Apache
- the prefork MPM is the most commonly used
- prefork MPM: many single-threaded child processes; 1 process = 1 connection
- Tomcat maxThreads * Tomcat instances = connections
- ServerLimit (MaxClients) * 1.1 (or 1.2) = connections
- measure performance to find the optimal values
Nginx
- Timeout default: 60 (DNS resolve: 30)
- Keep-Alive default: 75
- enabling keepalive to backend servers improves performance

Apache
- Timeout default: 300
- Keep-Alive default: on

Common
- overly long session lifetimes increase the load on server resources
- unnecessary idle sessions raise the risk of socket exhaustion outages
- tune appropriately, for example:
  - Nginx: Keep-Alive 5 / Timeout 5
  - Apache: Keep-Alive off / Timeout 10
Nginx
- gzip on/off enables or disables compression
- gzip_comp_level sets the compression level (1-9)

Apache
- enabled by adding the mod_deflate and mod_headers modules
- compression can be restricted to specific content types
Nginx
- set long cache Expires periods for static files (e.g., images)

Apache
- performance and security issues to address:
  - FileETag None (disables ETag comparison between cached and original files)
  - mod_cache: be sure to include CacheIgnoreHeaders
  - use mod_expires to set expiry periods

Common
- increasing the cache Expires period for static files improves performance
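The Apache-side cache notes above can be sketched as one config fragment; the content types, period, and the Set-Cookie choice are illustrative assumptions, not from the source:

```apache
FileETag None                          # disable the ETag comparison noted above
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/jpeg "access plus 3 months"
    ExpiresByType image/png  "access plus 3 months"
</IfModule>
<IfModule mod_cache.c>
    CacheIgnoreHeaders Set-Cookie      # example header to ignore when caching
</IfModule>
```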
Nginx (nginx.conf)
- restrict directory listing: autoindex off
- block access where needed: deny all
- server_tokens off

Apache (httpd.conf)
- restrict directory listing
- comment out or remove directory information
- block access where needed: deny all
- ServerSignature Off
- TraceEnable Off

Common
- run as a low-privilege account (nobody recommended)
- configure error page redirection
Nginx
- AJP
- Proxy

Apache
- AJP (mod_jk)
- Proxy

Common
- the configuration depends on the intended role: load balancer vs. 1:1 proxy
Nginx
- access_log path; access_log on/off
- error_log path
- filtering is possible with location ~* blocks

Apache
- LogFormat: defines the log format
- CustomLog: pipes logs to a destination
- SetEnvIf enables filtering

Common
- log only the files that need analysis
- excluding images and similar static files from logging improves performance
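The Apache CustomLog example earlier in this document uses env=!nolog-request without showing where that variable is set; a sketch of the matching SetEnvIf (the pattern and log path are assumptions, not from the source):

```apache
# Tag static-asset requests so the env=!nolog-request condition skips them.
SetEnvIf Request_URI "\.(gif|jpe?g|png|ico|css|js)$" nolog-request
CustomLog "logs/access_log" combined env=!nolog-request
```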

