Developer, author, musician, global domination theoretician
@Obdurodon Oh, and I should also mention that the reason I ran this test was that I ran into a customer who was using GlusterFS instead of NFS, and I wanted some data to see how the two compared. So I wasn't just pulling it randomly out of a stack of possible solutions.
3 weeks, 2 days ago on Testing GlusterFS for Magento
@Obdurodon Thanks for the info. FS-Cache is one of the things I want to take a look at. For the type of workloads that I see, I don't need consistency so much as I need cache invalidation. Many Magento users use NFS as the base for storing user files such as product images. If someone saves one of those images, it is not the end of the world if a few milliseconds of delay occur while the cache is invalidated on multiple machines. There are multiple solutions that could handle that, but for most Magento merchants they represent an infrastructure complexity that they shouldn't have to take on. So, for the scenario I tend to work in, I would prefer something that works out of the box and caches files across multiple machines.
That said, I really do like what has been done with GlusterFS. While it doesn't solve the problem I'm trying to solve it's got some really neat things in it.
@vinai Yep. Did a lot of copying and pasting from there.
1 month, 2 weeks ago on EAV Properties for Magento
@henrylearn2rock I'm pretty sure, yes. Most, if not all, modern operating systems have disk block caching. The test is easy: run a code loop writing to the file system and another one reading from it. If the timings are vastly different, the OS is caching.
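A minimal sketch of that test in PHP (the file path, buffer size, and iteration count are arbitrary choices of mine):

```php
<?php
// Rough sketch of the write-vs-read timing test described above.
$file  = '/tmp/cache-test.dat';
$data  = str_repeat('x', 8192); // 8 KB per operation
$iters = 10000;

// Time a loop of writes to the file system.
$start = microtime(true);
for ($i = 0; $i < $iters; $i++) {
    file_put_contents($file, $data);
}
$writeTime = microtime(true) - $start;

// Time a loop of reads of the same file.
$start = microtime(true);
for ($i = 0; $i < $iters; $i++) {
    $read = file_get_contents($file);
}
$readTime = microtime(true) - $start;

printf("write: %.4fs  read: %.4fs\n", $writeTime, $readTime);
// If the reads come back an order of magnitude faster than the writes,
// the OS is serving them from the block cache rather than the disk.
```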
My latest conversation: The First Annual Report on Programmer Ass-hattery
1 month, 2 weeks ago on For the last time, the file system is not slow!!
@dragooni It is a feature of the kernel, not the file system, so it is available to all file systems (I'm pretty sure this is true). It can be "bypassed" by passing the O_DIRECT flag to open() in C, which allows the application to control physical reads and writes directly.
1 month, 3 weeks ago on For the last time, the file system is not slow!!
@mkevac Yes, but if you are running PHP-FPM with Nginx you still have the same problem. The fact that Nginx can handle 10k connections does not mean that you want 10k PHP processes running in the background to handle the requests. So while your assertion is true, it's an apples-to-oranges comparison. For static content, Nginx is absolutely the best server. However, for running PHP loads, Apache handles the request just a little more efficiently.
My latest conversation: No-.htaccess httpd.conf file for Magento
2 months ago on Why is FastCGI /w Nginx so much faster than Apache /w mod_php?
@tubalmartin @Eugene OZ Every time someone uses shell_exec() for async operations a kitten gets a tummy ache.
I've not tried that. Do you have any benchmarks?
2 months, 2 weeks ago on Why you should not use .htaccess (AllowOverride All) in production
If you have no access to httpd.conf then your options are pretty limited. That said, I don't know what your requirements are for running a shared host, but I have the cheapest Linode VM which gives me full access for $20 a month and I love it.
If throughput is a concern: as we saw in the benchmark, disabling AllowOverride caused a 40% improvement in performance for static files. There is also the inherent security issue of allowing files in your document root to change the configuration of your web server on the fly.
Technically AllowOverride IS the alternative. The preference would be to keep config settings in httpd.conf.
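For reference, keeping those settings in httpd.conf looks something like this sketch (the directory path and rules are illustrative, not from the benchmark):

```apache
# Disable per-directory .htaccess lookups entirely; Apache then skips
# the stat() calls for .htaccess on every path component of a request.
<Directory "/var/www/html">
    AllowOverride None
    # Rules that would have lived in .htaccess move here instead,
    # e.g. the rewrite rules Magento ships in its .htaccess:
    RewriteEngine On
</Directory>
```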
My thought would be to copy the superglobals to a backup variable and then overwrite them after the opcodes, objects and variables have been copied.
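In userland, the backup/restore half of that idea might look like this sketch (the function names are mine; an actual PHP-core implementation would do this internally in C):

```php
<?php
// Sketch of the backup/restore idea for superglobals.
// Superglobals are writable from any scope, so plain assignment works.
function backupSuperglobals(): array
{
    return [
        '_GET'    => $_GET,
        '_POST'   => $_POST,
        '_SERVER' => $_SERVER,
        '_COOKIE' => $_COOKIE,
    ];
}

function restoreSuperglobals(array $backup): void
{
    $_GET    = $backup['_GET'];
    $_POST   = $backup['_POST'];
    $_SERVER = $backup['_SERVER'];
    $_COOKIE = $backup['_COOKIE'];
}
```

The backup would be taken before the request runs, and the restore would overwrite whatever the request left behind once the opcodes, objects, and variables have been copied.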
2 months, 3 weeks ago on Would this be a dumb idea for PHP core?
Additionally, 70% of all successful attacks come from inside an organization. Having a configurable value a) requires you to manage the key, and b) is something that an internal attacker may have knowledge of. Using a large pseudo-random number requires no configuration management and is not known by an internal individual. Defense in Depth, baby!
My latest conversation: Starting with Magento on Monday
3 months ago on Generating secure cross site request forgery tokens (csrf)
Could you explain why hashing values that are relatively easy to figure out is better than a pseudo random number generator?
There are parts of token generation that, on a basic level, do fall into the realm of cryptography since cryptography is about "writing secrets". Beyond that the link to crypto is simply that the cryptographic tooling does a better job of providing more, better, pseudo-random values.
When we're talking about predictability it will depend on which function we're talking about. If you have a timestamp, uniqid() is actually pretty easy to guess. It was designed to be unique, not unpredictable. And mt_rand() isn't so much predictable as it has a significantly smaller pool of values to choose from. In other words, mt_rand() is good, but openssl_random_pseudo_bytes() is better.
Kryptos (κρυπτός) and graphein (γράφειν) together just mean "secret writing". When we're generating a token, what we want to do is give the person on the web page a secret that will be extremely difficult to predict. The examples that I've found tend to rely on uniqid(), which is based off of the time and, thus, predictable. So when you're thinking about cryptography you are probably thinking about the actual act of encryption, which is not what we're talking about. We are using the tool from one of the first steps in the chain for creating an "unpredictable" value.
The 32 bytes (256 bits) of data give us 2^256, or about 1.16 x 10^77, possible values, which is a pretty big set of values for you to use, so I doubt that you would deplete entropy.
However, mt_rand() returns an integer, not a series of bytes. That means that you have only 2 billion or so numbers to choose from (mt_getrandmax() is typically 2^31 - 1). Compared to that other huge number, I would choose the latter.
It uses that as an example for generating a token, but that page also specifically states that it is based off of microtime. Because of that the value would be predictable.
...I should say a *significant* loss in security.
Thanks. That's a good point. In other words, using md5() or sha512 is not as important as getting the actual random bits. The hashing, itself, is really only there to make sure that the bits that come out do not break the format. One could almost say that when using openssl_random_pseudo_bytes() you could use md5(), hash_hmac() or base64_encode() without a loss of security, something that would not be possible to say about uniqid().
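To illustrate that point, here is a sketch (variable names are mine): all of the entropy comes from openssl_random_pseudo_bytes(), and the hash or encode step only reformats it into something safe to put in a form field.

```php
<?php
// The randomness comes from here: 32 bytes = 256 bits of entropy.
$raw = openssl_random_pseudo_bytes(32);

// Each of these is just a formatting step; none adds or removes entropy.
$tokenHex = bin2hex($raw);       // plain hex encoding
$tokenMd5 = md5($raw);           // hashing the random bytes
$tokenB64 = base64_encode($raw); // base64 encoding

// By contrast, seeding the token from uniqid() would tie it to the
// clock, making it guessable no matter how it is hashed afterwards.
```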
Sounds like the files were not actually pushed to the container. I would suggest that you contact support on the Get Satisfaction page for PHPCloud. I don't work for Zend anymore and so they may have changed things since I last worked on it.
My latest conversation: Setting max_input_time (with data!)
3 months, 4 weeks ago on Connecting to the Zend Developer Cloud using NetBeans for PHP