Monday, January 13, 2020

linux - How to SSH to ec2 instance in VPC private subnet via NAT server




I have created a VPC in AWS with a public subnet and a private subnet. The private subnet does not have direct access to the external network, so there is a NAT server in the public subnet which forwards all outbound traffic from the private subnet to the external network.



Currently, I can SSH from the public subnet to the private subnet, and also from the NAT server to the private subnet.
However, what I want is to SSH from any machine (home laptop, office machine, mobile) to the instances in the private subnet.



I have done some research suggesting that I can set up the NAT box to forward SSH to instances in the private subnet, but I've had no luck with this.



Can anyone list what I need to set up to make this possible?



The naming is:




laptop (any device outside the VPC)



nat (the NAT server in the public subnet)



destination (the server in the private subnet which I want to connect to)



I'm not sure whether the following are limitations or not:



The "destination" does not have a public IP, only a subnet ip, for example 10.0.0.1

The "destination" can not connect to "nat" via nat's public.
There are several "destination" servers, do I need to setup one for each?



Thanks


Answer



You can set up a bastion host to connect to any instance within your VPC:



http://blogs.aws.amazon.com/security/post/Tx3N8GFK85UN1G6/Securely-connect-to-Linux-instances-running-in-a-private-Amazon-VPC



You can choose to launch a new instance that will function as a bastion host, or use your existing NAT instance as a bastion.




If you create a new instance, as an overview, you will:



1) create a security group for your bastion host that will allow SSH access from your laptop (note this security group for step 4)



2) launch a separate instance (bastion) in a public subnet in your VPC



3) give that bastion host a public IP either at launch or by assigning an Elastic IP



4) update the security groups of each of your instances that don't have a public IP to allow SSH access from the bastion host. This can be done using the bastion host's security group ID (sg-#####).




5) use SSH agent forwarding (ssh -A user@publicIPofBastion) to connect first to the bastion, and then, once on the bastion, SSH into any internal instance (ssh user@private-IP-of-Internal-Instance). Agent forwarding takes care of forwarding your private key so it doesn't have to be stored on the bastion instance (never store private keys on any instance!)
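For convenience, recent OpenSSH clients (7.3 and later) can chain both hops in a single command; a minimal sketch, with hypothetical host names and user names:

# Load your key into the agent, then jump through the bastion in one step
ssh-add ~/.ssh/mykey.pem
ssh -A -J ec2-user@bastion.example.com ec2-user@10.0.0.1

# Or persist the route in ~/.ssh/config:
Host private-box
    HostName 10.0.0.1
    User ec2-user
    ProxyJump ec2-user@bastion.example.com
    ForwardAgent yes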



The AWS blog post above should provide the nitty-gritty details of the process. I've also included the link below in case you want extra background on bastion hosts:



Concept of Bastion Hosts:
http://en.m.wikipedia.org/wiki/Bastion_host



If you need clarification, feel free to comment.


linux - CentOS: error when removing file: "rm: cannot remove '.viminfo': No such file or directory"

I have a file named .viminfo in my home directory. I can see that the file is there with ls -lh:



$ ls -lh
...
drwxr-xr-x. 2 mt1022 1091 4.0K Oct 12 2016 .vim
-?????????? ? ? ? ? ? .viminfo
-rw-r--r--. 1 mt1022 1091 305 Nov 9 2013 .vimrc
...



However, I cannot delete this file:



$ rm .viminfo
rm: cannot remove '.viminfo': No such file or directory


I read somewhere that such corrupted files can be deleted by inode number. However, when I run ls -i I get the following output for the file:



145563901919042729 .cpan     144115239380596661 .vim
145563901918974272 .cpanm                     ? .viminfo
145564136279985406 .dask     144115238810163333 .vimrc


I also tried sudo chmod g+x .viminfo (the answer to a very similar post on this site). I still got the "No such file or directory" error.
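(For reference, the inode-based removal I was attempting would normally look like the sketch below, with a made-up inode number; in my case ls -i only prints ? for the file:)

# Inspect, then remove, a file by its inode number (inode is hypothetical)
find ~ -maxdepth 1 -inum 145563901918970000 -exec ls -l {} \;
find ~ -maxdepth 1 -inum 145563901918970000 -delete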



My question is: how can I delete such a corrupted file?







Additional info that might be helpful:




  1. The file is stored on a Lustre file system.

  2. The file was normal before and became corrupted after a recent sudden power outage.

  3. The file was not fixed by fsck.

redhat - Apache Config: RSA server certificate CommonName (CN) ... NOT match server name?

I'm getting this in error_log when I start Apache:




[Tue Mar 09 14:57:02 2010] [notice] mod_python: Creating 4 session mutexes based on 300 max processes and 0 max threads.
[Tue Mar 09 14:57:02 2010] [warn] RSA server certificate CommonName (CN) `*.foo.com' does NOT match server name!?
[Tue Mar 09 14:57:02 2010] [warn] RSA server certificate CommonName (CN) `www.bar.com' does NOT match server name!?

[Tue Mar 09 14:57:02 2010] [notice] Apache configured -- resuming normal operations


Child processes then seem to seg fault:




[Tue Mar 09 14:57:32 2010] [notice] child pid 3425 exit signal Segmentation fault (11)
[Tue Mar 09 14:57:35 2010] [notice] child pid 3433 exit signal Segmentation fault (11)
[Tue Mar 09 14:57:36 2010] [notice] child pid 3437 exit signal Segmentation fault (11)



The server is RHEL. What's going on, and what do I need to do to fix this?



EDIT
As requested, the dump from httpd -M:




Loaded Modules:
core_module (static)
mpm_prefork_module (static)
http_module (static)
so_module (static)
auth_basic_module (shared)
auth_digest_module (shared)
authn_file_module (shared)
authn_alias_module (shared)
authn_anon_module (shared)
authn_default_module (shared)
authz_host_module (shared)
authz_user_module (shared)
authz_owner_module (shared)
authz_groupfile_module (shared)
authz_default_module (shared)
include_module (shared)
log_config_module (shared)
logio_module (shared)
env_module (shared)
ext_filter_module (shared)
mime_magic_module (shared)
expires_module (shared)
deflate_module (shared)
headers_module (shared)
usertrack_module (shared)
setenvif_module (shared)
mime_module (shared)
status_module (shared)
autoindex_module (shared)
info_module (shared)
vhost_alias_module (shared)
negotiation_module (shared)
dir_module (shared)
actions_module (shared)
speling_module (shared)
userdir_module (shared)
alias_module (shared)
rewrite_module (shared)
cache_module (shared)
disk_cache_module (shared)
file_cache_module (shared)
mem_cache_module (shared)
cgi_module (shared)
perl_module (shared)
php5_module (shared)
python_module (shared)
ssl_module (shared)
Syntax OK

Sunday, January 12, 2020

filesystems - EXT3 vs EXT4 vs XFS

Recently I read a lot about "new" file systems.



I checked some benchmarks that show MySQL working faster on EXT4 or XFS (and some other FS).



I also "found" that XFS and EXT4 are included in CentOS 5.X




However, most of the articles I read speak either very positively or very negatively about XFS. The same goes for EXT4.



Although I have some Debian machines running on EXT4, I do not have much experience with it.



My questions are: Is it safe?



If power fails, what will happen and what data could be lost?



If the system crashes, what will happen and what data could be lost?




If memory or some other hardware (not the HDD controller or HDD) breaks, what will happen and what data could be lost?

linux - Restrict root ssh from all but one IP/hostname

I want to restrict root SSH login from all but a single IP address.



I was under the impression that I just had to add this to /etc/pam.d/sshd:



account required pam_access.so



and this to /etc/security/access.conf:



-:root:ALL EXCEPT IPADDRESS


but that doesn't seem to be working.
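For comparison, a common alternative that skips PAM entirely is an sshd Match block; a minimal sketch, using a hypothetical allowed address (203.0.113.10):

# /etc/ssh/sshd_config: deny root logins everywhere...
PermitRootLogin no

# ...except from the one trusted address
Match Address 203.0.113.10
    PermitRootLogin yes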

centos - Access linux machine by hostname within LAN?

I have a LAN set up with a bunch of Windows and Linux boxes, built on top of an AT&T DSL router. I don't have any type of DNS server running. All the Windows machines can identify themselves by machine name over the network; even a Linux NAS box is accessible by machine name. However, I recently built a CentOS Linux box and I want it to be accessible by machine name too. I've tried setting the hostname, but it does not work. Can someone help me with this problem?

mod cache - Apache caching based on cookie

I'm trying to put mod_cache in front of my application server to cache "public" requests but not requests from logged-in users. For various reasons using alternate subdomains or paths isn't a viable option for me. I have the basics set up as:



# Expiry and cache-control
SetEnvIf Cookie "NOCACHE" no-cache
Header set Cache-Control "no-cache" env=no-cache
RequestHeader set X-FW-NoCache "on" env=no-cache
ExpiresActive On
ExpiresDefault "access plus 1 days"
#ExpiresByType text/html "now"

CacheEnable disk /
CacheRoot /var/cache/apache
CacheIgnoreHeaders Set-Cookie
#CacheIgnoreCacheControl on
#CacheIgnoreNoLastMod on

RewriteEngine On

# Search Engine Safe URL rewrite
# Redirect ColdFusion requests to index.cfm
# matches /file.mp4 but not /file:name.mp4 (i.e., is a real file)
RewriteCond %{REQUEST_FILENAME} !/[^/:]+\.[^/:]{2,5}$
RewriteRule (.*) /index.cfm$1 [PT,L]


So if Apache sees the NOCACHE cookie it will always pass the request to the application server, even if it has it in cache. It mostly works but there's one issue that's causing me some grief.



If you visit the page without the cookie you will get a cached version with a future expiry date. If you then set the cookie and go back to that page the request is not sent because the browser has its own cached copy with a future expiry date.



How do I modify this so the browser always makes a request and the cache sends a 304 or a cached copy WITHOUT asking the application server to reprocess it? In other words, how do I tell mod_cache to cache the file, but not the client and downstream proxies?



I tried using ExpiresByType text/html "now", but then the cache won't cache it at all, even when CacheIgnoreCacheControl is on.




I also played around with CacheIgnoreNoLastMod, but didn't have any luck finding a solution.
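For what it's worth, the usual HTTP-level trick for "cache on the server but not on the client" is to split max-age and s-maxage; a sketch (not from the original post, and note that s-maxage also applies to downstream shared proxies):

# Browsers must revalidate every time (max-age=0), while shared caches
# such as mod_cache may keep the entry for a day (s-maxage=86400).
Header set Cache-Control "max-age=0, s-maxage=86400" env=!no-cache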

freebsd - Trying to "zfs attach" a new disk, how to get correct specification for the disk I'm adding?

I'm migrating data from my old server to zfs on FreeBSD 10.x (I'm actually on FreeNAS 9.10.2-u1 but doing this activity in console so it's pure FreeBSD). My problem is that zpool attach needs a new_device in the correct format or slice/partition information, which I don't know how to provide.



Because of costs, I'm migrating the data in two stages: copying the data from my old mirror to a new ZFS pool (without redundancy), then breaking the mirrors on the old server to move the mirror drives over and resilver on the new server, keeping 2 copies of the data at all stages. SMART stats are all good, and all disks are "enterprise" type. Although not ideal, so far it's gone well. I've copied over the data and connected the disks from the old server to the new server, where I'm now stuck on getting the correct args for zpool attach.




Current storage is as follows:



camcontrol devlist identifies the disk devices and model numbers, giving:



ada0 = 6TB disk
ada1 = 4TB disk
ada2 = 6TB disk
ada3 = BOOT MIRROR
ada4 = BOOT MIRROR
ada5 = 4TB disk
ada6 = 6TB disk


glabel status identifies the gptids for the 5 disks already in use:



gptid/c610a927-01da-11e7-b762-000743144400  ada0p2 - 6TB
gptid/c68f80ae-01da-11e7-b762-000743144400  ada2p2 - 6TB
gptid/3b2b904b-02b3-11e7-b762-000743144400  ada3p1 - BOOT MIRROR
gptid/fb71e387-016b-11e7-9ddd-000743144400  ada4p1 - BOOT MIRROR
gptid/c566154f-01da-11e7-b762-000743144400  ada5p2 - 4TB



zpool status identifies the 3 disks in the data pool so far, by gptid:



gptid/c610a927-01da-11e7-b762-000743144400 (from above this is ada0p2, 6TB)
gptid/c68f80ae-01da-11e7-b762-000743144400 (from above this is ada2p2, 6TB)
gptid/c566154f-01da-11e7-b762-000743144400 (from above this is ada5p2, 4TB)


So the new disks to attach are:




ada1 (4TB) - attach to gptid/c566154f-01da-11e7-b762-000743144400 (ada5p2)
ada6 (6TB) - attach to gptid/c610a927-01da-11e7-b762-000743144400 (ada0p2)

disk arriving shortly (6TB): attach on arrival to gptid/c68f80ae-01da-11e7-b762-000743144400 (ada2p2)


Problem:



What I'm stuck on is the actual command to use for attach. zpool attach gives an error whatever I try:




zpool attach ada0p2 ada6
missing specification

zpool attach gptid/c610a927-01da-11e7-b762-000743144400 ada6
missing specification


I'm guessing it's objecting to "ada6" and that I should be providing some other identifier, or a slice/partition ID instead. But I don't have these; ZFS creates them itself when it attaches the disk.
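(For reference, the zpool attach synopsis also takes the pool name as its first argument, which the commands above omit; a sketch with a hypothetical pool name "tank":)

# zpool attach [-f] <pool> <existing-device> <new-device>
zpool attach tank gptid/c566154f-01da-11e7-b762-000743144400 ada1
zpool attach tank gptid/c610a927-01da-11e7-b762-000743144400 ada6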




What is the correct command to use here, or what am I missing?

Saturday, January 11, 2020

postgresql - nginx / node.js / postgres, scalability problems?

I have an app running with:





  • one instance of nginx as the frontend (serving static file)

  • a cluster of node.js application for the backend (using cluster and expressjs modules)

  • one instance of Postgres as the DB



Is this architecture sufficient if the application needs to scale (this is only for HTTP/REST requests) to:




  • 500 requests per second (each request only fetches data from the DB; the data could be several KB, with no big computation needed after the fetch).



  • 20000 users connected at the same time




Where could the bottlenecks be?

Friday, January 10, 2020

vmware esxi - Storage vMotion fails with error 0xbad0060 (Necessary module isn't loaded)

We ran into the following problem: we added a new LUN to our small SAN when we upgraded from ESX 4.1 to ESXi 5.0. We wanted to move a number of VMs from one LUN to the other using Storage vMotion. One of the reasons for that was to make sure the VMs are safe when we upgrade from VMFS 4 to VMFS 5.




Unfortunately, we ran into the following error when trying to perform a Storage vMotion:




A general system error occurred: Failed to initialize migration at source.
Error 0xbad0060. Necessary module isn't loaded.




The same error occurs when trying a host vMotion.



Any idea what could cause this?

Thursday, January 9, 2020

scalability - How many databases can SQL server express handle



I'm running a SQL Server 2005 Express instance currently hosting ~50 databases. The databases serve clients' CMS/eCommerce websites. The connections are to a single instance; no user-attached instances are being used. The median DB size is 5MB, the largest 20MB. The websites are mostly low-traffic: CPU utilization is < 10%, and the SQL process uses at most 350MB of RAM.

For now I'm well within the SQL Server Express limits of 1 CPU/1GB RAM. In the upcoming expansion the number of databases may double. If I assume linear growth in requirements, the 1GB limit still won't be reached, but I'm concerned that the number (> 100) of databases may become an issue. I'm not sure if this usage scenario is what Microsoft had in mind for Express.

Is there any information, or preferably real-world experience, regarding SQL Server Express's ability to handle lots of small databases? Can I expect it to run 150 databases, or should I start working on migrating to other database servers/file-based databases?


Answer




According to the SQL Server 2005 Express edition overview:




there are no limits to the number of databases that can be attached to the server.




So the limit is how much of the server's performance you can utilise. Bear in mind that, as the Express edition will only use one CPU core, on a quad-core processor it cannot use more than 25% of the CPU.




If you later on find that you need to utilise more of the server's performance, you can quite easily upgrade to a different version of SQL Server.
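If you want to keep an eye on how close you are getting to the Express limits, the per-database sizes are easy to script; a sketch using sqlcmd (the instance name is hypothetical; sizes come out in MB):

sqlcmd -S .\SQLEXPRESS -E -Q "SELECT d.name, SUM(mf.size) * 8 / 1024 AS size_mb FROM sys.master_files mf JOIN sys.databases d ON d.database_id = mf.database_id GROUP BY d.name ORDER BY size_mb DESC;"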


ssl - After setting up Elastic Load balancer my https doesn't work anymore. Nginx error



I have a regular instance that works fine when not behind a load balancer.
I set up an ELB with 80 forwarding to 80, 443 forwarding to 443, and sticky sessions.
Afterwards, I receive this error when going to any HTTPS page:




The plain HTTP request was sent to HTTPS port


I handle forcing HTTPS on certain pages in my nginx configuration.
What do I need to do to get this working? I'm putting a barebones version of my nginx config below.



http {
    include mime.types;
    default_type application/octet-stream;

    # Directories
    client_body_temp_path tmp/client_body/ 2 2;
    fastcgi_temp_path tmp/fastcgi/;
    proxy_temp_path tmp/proxy/;
    uwsgi_temp_path tmp/uwsgi/;

    server {
        listen 443;
        ssl on;

        ssl_certificate ssl.crt;
        ssl_certificate_key ssl.key;

        server_name www.shirtsby.me;
        if ($host ~* ^www\.(.*)) {
            set $host_without_www $1;
            rewrite ^/(.*) $scheme://$host_without_www/$1 permanent;
        }

        location ~ ^/(images|img|thumbs|js|css)/ {
            root /app/public;
        }
        if ($uri ~ ^/(images|img|thumbs|js|css)/) {
            set $ssltoggle 1;
        }
        if ($uri ~ "/nonsecure") {
            set $ssltoggle 1;
        }
        if ($ssltoggle != 1) {
            rewrite ^(.*)$ http://$server_name$1 permanent;
        }

        location / {
            uwsgi_pass unix:/site/sock/uwsgi.sock;
            include uwsgi_params;
        }
    }

    server {
        listen 80;
        server_name www.shirtsby.me;
        if ($host ~* ^www\.(.*)) {
            set $host_without_www $1;
            rewrite ^/(.*) $scheme://$host_without_www/$1 permanent;
        }

        if ($uri ~ "/secure") {
            set $ssltoggle 1;
        }

        if ($ssltoggle = 1) {
            rewrite ^(.*)$ https://$server_name$1 permanent;
        }

        location ~ ^/(images|img|thumbs|js|css)/ {
            root /app/public;
        }

        location / {
            uwsgi_pass unix:/home/ubuntu/site/sock/uwsgi.sock;
            include uwsgi_params;
        }
    }
}

Answer



I ended up getting the answer from vandemar in the nginx IRC channel.
It seems pretty simple, but I struggled to figure it out. The ELB was handling the SSL; I had already given it all the cert information. The problem was trying to handle it again on the individual instances, in the configuration file.
The solution is to just eliminate all the SSL directives from the config file.
So removing these three lines fixed everything:




ssl on;
ssl_certificate ssl.crt;
ssl_certificate_key ssl.key;
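If you still need to force HTTPS on certain pages after terminating SSL at the ELB, the usual approach is to test the X-Forwarded-Proto header that the ELB adds to each request; a minimal sketch (not part of the original answer):

server {
    listen 443;   # plain HTTP here: the ELB has already terminated SSL

    # The ELB sets X-Forwarded-Proto to "http" or "https"
    if ($http_x_forwarded_proto != "https") {
        rewrite ^(.*)$ https://$server_name$1 permanent;
    }
}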

centos - What does cron mail status 0x0047#012 Mean



I noticed in the mail log last night that I'm getting a new message:



MAIL (mailed XXX bytes of output but got status 0x0047#012)


The cron job did run successfully, though (it's a script that transmits to a third-party API, and they confirmed that they received the data), but I'm unable to see the status of the transmission on our end.



I'm thinking it might be related to the amount of available disk space, but I have no way to be sure.




Here is the output of df -h:



Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      9.8G  9.7G     0 100% /
devtmpfs        1.9G   64K  1.9G   1% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
/dev/xvdb1       48G  6.7G   39G  15% /var/www



For reference, we're using CentOS 6.6 on AWS.



I tried looking online for the meaning of this message, but I was unable to find it. If anyone could shed some light on it, that would be great, thanks.



EDIT:



The answer marked as a dupe did not help me, as it's not related to my question and the user asking that question got a different error response.


Answer



So I got hold of our system admin (we contract out to him; I'm just a dev at my company), and he said this was an issue with updating our AWS server. Basically, we log to our /var/httpd folder since we have plenty of space there, but the update caused our pointers to go away. Here are his notes to help anyone in the future.




These notes relate to the issue in general, and the apache logs:



After a round of server updates last week, the apache logs were writing to the wrong location. This has been fixed and the logs are now properly writing to the /var/httpd volume again. We have the logs writing to the /var/httpd volume to keep it from suffocating the root volume. The root volume is 10GB and the /var/httpd volume is 50GB.



These notes are specific to the cron issue:



It was probably the root volume space issue. Mail servers write to queues which then send. If the volume was full then it couldn’t write to the queue.



I would still be interested in finding out where I can see a list of the status codes that cron uses, as that was my original question, and I can't seem to find this info. If I find this information out, I will update this answer with that.


Monday, January 6, 2020

django - Amazon EC2 AMI recommendations for free tier?



Amazon Web Services recently introduced a free tier, where you basically get free resources to try out AWS and run tiny sites and projects. It's free as long as you remain below certain limits on bandwidth, disk storage, etc.



Since going over the limits can quickly become quite expensive (for a hobbyist), I would like some recommendations or suggestions about which AMIs I can run on the free tier, for the purpose of trying out Ruby on Rails and/or Django.


Answer



Use the Amazon Linux AMI. It's the only AMI that's officially supported and maintained by Amazon. It's optimized for EC2, includes the ec2-api-tools, boots from EBS, and has a package repository hosted on EC2. It also includes great features like CloudInit.




There's more info in the Amazon Linux AMI User Guide.


linux - df shows negative values for used?



I have a CentOS 5.2 server, and running df -h I get this:



Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      672G -551M  638G   0% /
/dev/hda1              99M   12M   82M  13% /boot
tmpfs                 2.0G     0  2.0G   0% /dev/shm



That space wasn't even near 10% usage the last time it showed a correct value. I'm at a loss as to what's going on.



EDIT #1



OK, so I had to reboot the server because SSHD hung; I'm guessing it was related to this.



Some new info: after rebooting, df -h showed 12GB used (2%), but if I run du -hcs / it shows 46GB total. There's a big difference here.



EDIT #2




After about 15 minutes of uptime, df -h is showing negative values again:



Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      672G  -24G  660G    - /


EDIT #3




More info: I ran a fsck and this is the output:



Checking all file systems.
[/sbin/fsck.ext3 (1) -- /] fsck.ext3 -f -y /dev/VolGroup00/LogVol00
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/VolGroup00/LogVol00: 204158/181633024 files (1.3% non-contiguous), 9224806/181633024 blocks

[/sbin/fsck.ext3 (1) -- /boot] fsck.ext3 -f -y /dev/hda1
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/boot: 34/26104 files (5.9% non-contiguous), 15339/104388 blocks

Answer



I think it is file system corruption. You should unmount the partition and run fsck.
Also check the logs and the console for any file system errors.
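Since this is the root filesystem and cannot be unmounted while the system is running, one option is to force a full check at the next boot; a sketch for CentOS 5 (assuming the stock init scripts):

# Request a full fsck of all filesystems on the next boot
touch /forcefsck
reboot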



hardware - List of all IBM motherboard models with a certain socket?



I just got a really good deal on two Intel quad-core Xeon L5420 processors, and I have access to other deals on bare IBM servers (case + motherboard, no processor/RAM/HDD). How can I easily find out which server or motherboard models will be compatible with this processor? I am ideally looking for a dual-processor motherboard, and I see that it uses Socket LGA771.



So I guess the underlying question is: how can I find which IBM motherboards (and servers) have dual Socket LGA771?


Answer



Intel Xeon L5420 is known on IBM systems as 44E5133.



This compatibility matrix indicates that at least the IBM x3650 supports this processor (it is offered with model 7979).




IBM says that it has been tested, and so it works. But other server models with similar processors might also accept the L5420.


Sunday, January 5, 2020

linux - Best file sharing protocol for Windows clients?



I want to share files on a Linux server with Windows 7 clients. I have a choice between multiple file sharing protocols: SMB/CIFS, FTP, WebDAV, NFS… the question is: which one is the best for my needs?



Here are my criteria:




  • High performance on fast links (LANs), usable on slow links (WANs). Raw throughput is the most important, though high random performance would be nice (random reads/writes, opening a lot of files…). On Gigabit LANs I want to be able to saturate my network link and have it feel like I'm using a local drive. On WANs I expect low overhead so it can accommodate high latency and make good use of the available network bandwidth.

  • Transparency for applications, i.e. mountable as a drive letter or close.


  • Security and firewall-friendliness are bonuses (as long as I can tunnel it over a VPN).



SMB/CIFS is slow over WANs, FTP doesn't seem very transparent, and it seems all Windows NFS clients are ugly and lack important functionality such as correct support for Unicode in file names. I haven't tried WebDAV yet.



So, what's your stance on the subject? I'm not opposed to using two different protocols for LAN and WAN but I'd prefer to avoid it for usability reasons.


Answer



You only have a couple of really good options. You are correct about SMB/CIFS over WAN; it is not the most efficient. The main benefit of going with SMB/CIFS is avoiding a regular client/server architecture. The downside of a decentralized architecture is inefficiency, which becomes more noticeable the more nodes are connected. If you demand a decentralized setup over WAN/LAN, SMB/CIFS would be the only way to go. Also, SMB/CIFS over WAN is not recommended due to security concerns.



I would prefer NFS in an all (or mostly) Linux LAN, especially in a situation that is always connected, like a shared home directory. NFS over WAN is nasty in regards to firewalls. It can be done, and I have done it, but it is more than just opening a port. NFS really is a great choice when you want a NAS-type setup.




If you are OK with a client/server model, I highly recommend WebDAV. You get read support from any regular HTTP web browser, easy firewalling with just one port (80 and/or 443), and solid performance.
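For the drive-letter requirement, Windows 7 can map a WebDAV share directly; a sketch (the hostname is hypothetical, and the WebClient service must be running):

net use Z: https://files.example.com/dav /persistent:yes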



FTP has its advantages, but over WAN you would want FTP with explicit SSL. FTPES is newer and not all FTP clients support it; modern clients do, like a recent copy of FileZilla. But once again, firewalling it is more than just popping a port open.



You really can't get more transparent than HTTP, IMO. It's also what I use for my WAN/LAN; I even prefer it for my regular LAN transfers.


Saturday, January 4, 2020

hp proliant - Moving disks from HP DL360g5 to DL380g6 with Windows 2008

I have a DL360 G5 with an E200i array controller, 128MB cache + BBWC. It is running Windows 2008 as a member of the domain.
I have a total of 4 disks there: 2x72GB SAS and 2x300GB SAS, in two mirrors, one for the system and one for data.
I'd like to move all those disks to a newer DL380 G6, with a P410i, 256MB cache + BBWC.
The disks themselves should be compatible, and the P410i should recognize the old logical volumes without problems.
The question is: will Windows 2008 be able to boot after swapping the disks into the DL380 G6?

Friday, January 3, 2020

debian - Recover ext3 files from hard disk with bad sector

I have a folder of about 5GB that suddenly disappeared. When I checked the hard disk, I found out it has about 2-3MB of bad sectors affecting this folder. Maybe they hit the folder's pointer (directory metadata).



The partition is EXT3, and the operating system is Debian.




I tried the fsck command, but it hasn't worked.



What should I do? How can I recover the data? Any program or command?
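A common first step here (not from the original post; the device and paths are hypothetical) is to image the failing partition with GNU ddrescue and work on the copy:

# Copy the failing partition to an image, skipping unreadable areas first
ddrescue -f -n /dev/sda3 /mnt/backup/part.img /mnt/backup/rescue.map
# Retry the bad areas a few more times
ddrescue -f -r3 /dev/sda3 /mnt/backup/part.img /mnt/backup/rescue.map
# Then check the copy and mount it read-only via loopback
e2fsck -f /mnt/backup/part.img
mount -o loop,ro /mnt/backup/part.img /mnt/recovered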
