Hello, I'm noticing that my process (which is quite I/O intensive) has been idle for a while.
I have a strange I/O pattern in the statistics (attached).
I'm wondering if there are I/O limits in place, akin to the "credits" in AWS. Is that the case?
Nope, netcup doesn't use a credit system or anything like that.
Do you use an RS or a VPS?
It's a root server, which I think is still virtualized, since I only have access to a fraction of the CPU's cores (which is fine).
My process ran fine for a few hours, then the I/O dropped to nothing, as you can see from the chart. I tried restarting the process (which wasn't hung, just extremely slow at I/O), and it was slow even after the restart, so I can only assume the hypervisor decided I had used enough I/O for the day.
RS are virtualized, that's true. The difference to a VPS is that the CPU cores are dedicated. Well, no, going by current information they aren't dedicated; they are just less overprovisioned, so that steal should stay permanently below 3%.
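You can check that steal claim yourself from inside the guest. A minimal sketch (my own, Linux-only, not anything netcup provides) that samples the `steal` field of `/proc/stat` over an interval:

```python
import time

def steal_percent(interval=1.0):
    """Sample /proc/stat twice and return the CPU 'steal' share in percent.

    Field order on the aggregate 'cpu' line (Linux):
    user nice system idle iowait irq softirq steal guest guest_nice
    """
    def snapshot():
        with open("/proc/stat") as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        steal = fields[7] if len(fields) > 7 else 0  # older kernels lack it
        return sum(fields), steal
    total0, steal0 = snapshot()
    time.sleep(interval)
    total1, steal1 = snapshot()
    elapsed = total1 - total0
    return 100.0 * (steal1 - steal0) / elapsed if elapsed else 0.0

if __name__ == "__main__":
    print(f"steal: {steal_percent():.2f}%")
```

If that value sits well above 3% while your process is slow, the problem is CPU contention rather than an I/O cap.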
But RS don't have dedicated I/O, so it could be really hard to pin down the reason for your problems. The only people who know the facts are the netcup employees themselves, and I'm quite sure they won't tell you, since support doesn't invest time in customers' problems (that's why netcup is so cheap).
So yeah... is there a possibility to run your software in the rescue system of the RS?
netcup support only accepts evidence created in their rescue system, so otherwise you wouldn't have a chance of getting help.
I've heard there is a limit on VPS products of 300 IOPS, to ensure everyone gets a fair share of the disks.
There shouldn't be a limit on RS products, but Netcup doesn't guarantee any IOPS on RS products either.
Could you tell us what kind of disks you have, SSD or SAS? (G9 products only have SSD storage; G8 products have both.)
If you have SSD storage, I would contact support and ask whether your server could be moved to another host system.
Can you verify the IOPS from inside the system too? Sometimes the SCP shows bogus data.
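For measuring from inside the guest, `fio` is the usual tool. As a dependency-free alternative, here is a rough Python sketch (my own illustration, not an official benchmark): it counts fsync'd 4 KiB writes per second, which gives a conservative lower bound on write IOPS.

```python
import os
import tempfile
import time

def measure_write_iops(block_size=4096, seconds=2.0):
    """Issue synchronous 4 KiB writes for a fixed interval and count them.

    Each write is followed by fsync, so every iteration should cost at
    least one device I/O; the result underestimates peak IOPS.
    """
    fd, name = tempfile.mkstemp()
    block = os.urandom(block_size)
    count = 0
    deadline = time.monotonic() + seconds
    try:
        while time.monotonic() < deadline:
            os.write(fd, block)
            os.fsync(fd)  # force the block down to the device
            count += 1
    finally:
        os.close(fd)
        os.unlink(name)
    return count / seconds

if __name__ == "__main__":
    print(f"~{measure_write_iops():.0f} synchronous write IOPS")
```

If this reports far more than 300 IOPS sustained, the SCP graph is probably the unreliable part; if it stalls around a round number, a cap is more plausible.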
I appreciate your effort in helping here.
Since I was time-constrained, I returned the machine and moved the project elsewhere.
For what it's worth, I think it makes sense that there are still limitations in place. It's still a virtualized environment; probably, as whoami said, the premium is just being less overprovisioned. As you can see, my process sustained more than 300 IOPS for over two hours, and I was probably rate-limited. I know this is unlikely to be a software problem, since it runs flawlessly on my machine and on another bare-metal machine.
The product was an RS 4000 G9, therefore SSD-only. When I still had the server, I could see with `iotop` that the I/O was intermittent, just like the graph shows after 16:30, so I think the graph can be considered reliable in this case. I have no idea where the sinusoidal pattern comes from, though XD
Thanks again for the input.
Fair enough.
Having a similar situation with RS 8000 G9. The conclusion is that netcup does limit IOPS?
Yes, there will definitely be some kind of limit. There is no other way in such shared environments. The only question is whether these limits are communicated transparently or not. Probably not in this case. But given the prices, it's quite understandable.
It's a pretty tough nut to crack. I ran some tests and the IOPS are good on netcup (even better than at the provider where I don't have the same issues), but for some reason I am encountering exactly the situation OP describes. Unfortunately it's not worth it for me to investigate further, so I will leave it at that.
Maybe this (older, machine-translated) German discussion thread could be of interest: "Throttling of hard drive throughput"