Developer, author, musician, global domination theoretician
@nitinanuj Then you'll need about 30,000 cores to handle those requests, so it's not something I'd be too worried about.
2 months, 2 weeks ago on Why is FastCGI /w Nginx so much faster than Apache /w mod_php?
@JaimieSirovich @kschroeder @Jason Yang @petewarnock I don't know exactly what goes on behind the scenes, but the thread safety adds a significant amount of overhead. Enough that even Microsoft said not to use it on IIS and to use FastCGI instead.
3 months, 3 weeks ago on Why is FastCGI /w Nginx so much faster than Apache /w mod_php?
@JaimieSirovich @Jason Yang @petewarnock That is the conclusion that my testing supports. That said, threads would not be better than processes, because then you would need to use ZTS (the thread-safe build of PHP) and your performance would tank.
@garet1 @kschroeder @Gopalakrishna Palem That is not true. Neither mod_php nor php-fpm cache opcodes by default. It is internal to the Zend Engine itself and has no bearing on which web server you are using.
5 months ago on Why is FastCGI /w Nginx so much faster than Apache /w mod_php?
@Gopalakrishna Palem It's a completely different test. I was testing *PHP* throughput.
@onmountain Yep, you can deploy to non-Zend PHP installations.
5 months, 1 week ago on Zend Server is proprietary. NNNOOOOO!!!!
@DimaSoltys If it's on the local host I will always connect via Unix socket if it's available.
6 months, 1 week ago on Why is FastCGI /w Nginx so much faster than Apache /w mod_php?
@akaMrJohn @Gwyneth Llewelyn The data seems to support that assertion. One additional qualification would be that Apache be shielded from serving frontend requests by a load balancer or CDN reverse proxy. The concurrency restrictions of the prefork MPM are still pertinent.
6 months, 4 weeks ago on Why is FastCGI /w Nginx so much faster than Apache /w mod_php?
@ocelikdemir I don't have them, since it required a code change and I needed to revert. What I did was add code to the end of Mage_Core_Model_App::dispatchEvent() that took the result of memory_get_usage() and wrote it with file_put_contents(<filename>, $data, FILE_APPEND). I then ran the output through awk to build a CSV file, opened the CSV file in Excel, and created the graphs.
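For reference, a minimal sketch of what that hack looked like. The function name and log path here are illustrative, not the exact code I used:

```php
<?php
// Illustrative sketch: append one CSV row of (event name, current memory
// usage) per dispatched event. In Magento this ran at the end of
// Mage_Core_Model_App::dispatchEvent().
function logMemoryUsage(string $eventName, string $logFile): void
{
    $row = $eventName . ',' . memory_get_usage() . "\n";
    file_put_contents($logFile, $row, FILE_APPEND);
}

// Example call; in Magento $eventName would be the dispatched event's name.
$logFile = sys_get_temp_dir() . '/memory_usage.csv';
logMemoryUsage('controller_action_predispatch', $logFile);
```

From there it's a short trip through awk (or straight into Excel, since the rows are already comma-separated) to get the graphs.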
7 months, 2 weeks ago on How much memory does Magento use?
@DIREKTSPEED I should also mention that the point of this was not to see which is faster, but to address _common claims_ about Nginx versus Apache. People say "Nginx is faster" and I say "ehhhhh, not so fast." That is it.
8 months, 3 weeks ago on Why is FastCGI /w Nginx so much faster than Apache /w mod_php?
@DIREKTSPEED You don't need to get your panties in a bunch. This was testing the typical Apache scenario against the typical Nginx scenario for PHP. The event MPM for Apache is irrelevant to this discussion; it will not make PHP any faster, because Apache would then need to use PHP-FPM in exactly the same manner as Nginx does. This post was about PHP, not static content.
@gokulmig I did a similar test on Gluster a while back http://www.eschrade.com/page/testing-glusterfs-for-magento/
10 months ago on More – The file system is slow
Actually, employers have difficulty finding PHP developers. If you know PHP well you will probably get a job.
And I defined what "effective" meant: it can get the required job done more quickly and with fewer resources than other options. (Yes, I expect this will be argued with as much anecdotal evidence as can be mustered.)
But note the actual point of the article instead of bickering over the pointless banality that happens on so many tech blogs. Google was surprised that their implementation of PHP on the App Engine was as popular as it was. My point is that it should not be surprising given what PHP has and is doing. Given that Google has the best data concerning the web, their surprise must be based on something OTHER than data. THAT is my point.
1 year ago on Google finally acknowledges that PHP exists
@indy2kro @blacksonic I'm not sure why this is funny. You specifically state that it may be true for open source projects, i.e., publicly available platforms. I was not talking about private applications. Note the paragraph before: I was talking about WordPress. Note the paragraph after: I was talking about Magento, WordPress, Joomla, etc. I made no claim that Magento was the biggest PHP project.
@shayfalador I redid the test while watching vmstat and IOWait time was zero. System time was at 3%. I re-ran the test with 3 concurrent processes: IOWait time touched 2% once and system time was at 8%. Most of the time was spent in the logger userland code (65%).
Disks have a bad reputation as being slow, and they are... when they are functioning as memory, such as swapping. However, when disks are being used as they should be (persistent storage) I have seen very few instances where disk speed, itself, was the actual problem.
1 year ago on How much does logging affect performance?
@shayfalador There are a couple of things wrong with your assertions. First of all, whether hard disks are "slow" really depends on what you are comparing them to. I did a test the other day on my local drive in a VM and got about 43MB per second for writes, or 45,088,768 bytes. The logged element in my example was 140 bytes long, so I would need about 300,000 writes per second to saturate the drive interface, which simulates about 3,000 requests per second at 100 log writes per request. And that is on a desktop machine with an old 7200RPM hard drive. That is hardly problematic from a "scale" perspective.
A more realistic scenario that would require 100 log writes per request is that this would be a request of a moderate to complex application which would take several hundred milliseconds to run. A web server running that kind of application will not be serving 1000 requests per second, unless it is a VERY high powered machine at which point my hard drive numbers would be significantly higher because you don't have a single 7200RPM drive on a machine like that.
What it basically comes down to is that your assertion that a thousandth of a second per request is a lot is wrong when put into the context of an application that would require 100 log events per request.
And when you are working with an application of this kind of complexity, you will be flying blind in your production environment when trying to figure out why something is misbehaving. A 1/1000th-of-a-second cost on a request that takes several hundred milliseconds is peanuts compared to the insight you can get.
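To make the arithmetic above concrete (all numbers are the ones from this thread; the drive figure is from my informal test, not a formal benchmark):

```php
<?php
// Back-of-the-envelope math from the comment above.
$driveThroughput  = 45088768; // ~43MB/s measured for sequential writes
$logEntrySize     = 140;      // bytes per logged element
$writesPerRequest = 100;      // hypothetical heavy-logging application

// How many log writes per second would saturate the drive interface,
// and how many requests per second that corresponds to.
$writesPerSecond   = intdiv($driveThroughput, $logEntrySize);    // ~322,000
$requestsPerSecond = intdiv($writesPerSecond, $writesPerRequest); // ~3,200

echo "Writes/sec to saturate drive: $writesPerSecond\n";
echo "Requests/sec at $writesPerRequest writes each: $requestsPerSecond\n";
```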
I would venture to say that Nginx with FastCGI would be faster
1 year ago on Why is FastCGI /w Nginx so much faster than Apache /w mod_php?
@edushyant My test was just with the opcode cache and not the data cache and so no configuration in Magento was necessary. That said, Optimizer+ from Zend Server had an APC compatibility layer built into it. I believe it is still there but I don't know offhand.
1 year ago on Magento Performance on PHP 5.3, 5.4 and 5.5RC3
@EricHerrmann2 Requests per second
1 year, 1 month ago on Magento Performance on PHP 5.3, 5.4 and 5.5RC3
@Obdurodon Oh, and I should also mention that the reason why I did this test was because I ran into a customer who was using GlusterFS instead of NFS and I wanted some data to see how it compared. So I wasn't just pulling it randomly out of a stack of possible solutions.
1 year, 3 months ago on Testing GlusterFS for Magento
@Obdurodon Thanks for the info. FS-Cache is one of the things I want to take a look at. For the type of workloads that I see, I don't need consistency so much as I need cache invalidation. Many Magento users use NFS as the base to store user files such as product images. If someone saves one of those images, it is not the end of the world if a few milliseconds of delay occurs while the cache is invalidated on multiple machines. There are multiple solutions that can handle that, but they represent an infrastructure complexity that most Magento merchants shouldn't take on. So, for the scenarios I tend to work in, I would prefer something that caches files across multiple machines out of the box.
That said, I really do like what has been done with GlusterFS. While it doesn't solve the problem I'm trying to solve it's got some really neat things in it.
@vinai Yep. Did a lot of copying and pasting from there.
1 year, 3 months ago on EAV Properties for Magento
@henrylearn2rock I'm pretty sure, yes. Most, if not all, modern operating systems have disk block caching. The test is easy: do a code loop writing to the file system and another one reading. If the timings are vastly different, then the OS is caching.
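A rough version of that test might look like this (the iteration count and file size are arbitrary; the point is only the relative timings):

```php
<?php
// Quick-and-dirty check for OS disk block caching: time repeated writes,
// then time repeated reads of the file we just wrote. If reads come back
// far faster than the disk could physically deliver, the OS is caching.
$file = tempnam(sys_get_temp_dir(), 'cachetest');
$data = str_repeat('x', 1024 * 1024); // 1MB of data

$start = microtime(true);
for ($i = 0; $i < 100; $i++) {
    file_put_contents($file, $data);
}
$writeTime = microtime(true) - $start;

$start = microtime(true);
for ($i = 0; $i < 100; $i++) {
    $contents = file_get_contents($file);
}
$readTime = microtime(true) - $start;

printf("100 x 1MB writes: %.4fs, reads: %.4fs\n", $writeTime, $readTime);
unlink($file);
```

On any machine with block caching the read loop comes back at memory speed, well beyond what the spindle could actually deliver.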
1 year, 4 months ago on For the last time, the file system is not slow!!
@dragooni It is a feature of the kernel and not the file system. Therefore it would be available to all file systems (I'm pretty sure this is true). It can be "bypassed" by passing the O_DIRECT option to open() in C which allows the application to directly control physical reads and writes.
@mkevac Yes, but if you are running PHP-FPM with Nginx you still have the same problem. Even though Nginx can handle 10k connections, that does not mean you want 10k PHP processes running in the background to handle them. So while your assertion is true, it's an apples/oranges comparison. For static content, Nginx is absolutely the best server. However, for PHP loads, Apache handles the request just a little more efficiently.
1 year, 4 months ago on Why is FastCGI /w Nginx so much faster than Apache /w mod_php?
@tubalmartin @Eugene OZ Every time someone uses shell_exec() for async operations a kitten gets a tummy ache.
I've not tried that. Do you have any benchmarks?
1 year, 5 months ago on Why you should not use .htaccess (AllowOverride All) in production
If you have no access to httpd.conf then your options are pretty limited. That said, I don't know what your requirements are for running a shared host, but I have the cheapest Linode VM which gives me full access for $20 a month and I love it.
If throughput is a concern: as we saw in the benchmark, disabling AllowOverride yielded a 40% performance improvement for static files. There is also the inherent security issue of allowing files in your document root to change the configuration of your web server on the fly.
Technically AllowOverride IS the alternative. The preference would be to keep config settings in httpd.conf.
My thought would be to copy the superglobals to a backup variable and then overwrite them after the opcodes, objects and variables have been copied.
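A userland illustration of the idea (the actual proposal would live in PHP core; this just shows the backup/restore dance, with simulated request data):

```php
<?php
// Userland sketch of the proposal: snapshot the request superglobals,
// let the copy of opcodes/objects/variables clobber them, then restore
// the originals afterwards.
$_GET = ['id' => '42']; // simulate incoming request data

$backup = ['_GET' => $_GET, '_POST' => $_POST, '_COOKIE' => $_COOKIE];

// ... the core copy of opcodes, objects and variables would happen here,
// potentially overwriting the superglobals with pre-warmed values ...
$_GET = []; // simulate the clobbering

foreach ($backup as $name => $value) {
    $GLOBALS[$name] = $value; // restore this request's values
}

var_dump($_GET['id']); // back to "42"
```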
1 year, 5 months ago on Would this be a dumb idea for PHP core?
Additionally, 70% of all successful attacks come from inside an organization. Having a configurable value a) requires you to manage the key, and b) is something that an internal attacker may have knowledge of. Using a large pseudo-random number requires no configuration management and is not known by an internal individual. Defense in Depth, baby!
1 year, 5 months ago on Generating secure cross site request forgery tokens (csrf)
Could you explain why hashing values that are relatively easy to figure out is better than a pseudo random number generator?
There are parts of token generation that, on a basic level, do fall into the realm of cryptography, since cryptography is about "writing secrets". Beyond that, the link to crypto is simply that cryptographic tooling does a better job of providing more, and better, pseudo-random values.
When we're talking about predictability it will depend on which function we're talking about. If you have a timestamp, uniqid() is actually pretty easy to guess. It was designed to be unique, not unpredictable. And mt_rand() isn't so much predictable as it has a significantly smaller pool of values to choose from. In other words, mt_rand() is good, but openssl_random_pseudo_bytes() is better.
Kryptos (κρυπτός) and graphein (γράφειν) together just mean "secret writing". When we're generating a token, what we want to do is give the person on the web page a secret that will be extremely difficult to predict. The examples that I've found tend to rely on uniqid(), which is based off of the time and, thus, predictable. So when you're thinking about cryptography you are probably thinking about the actual act of encryption, which is not what we're talking about. We are using the tool from one of the first steps in the chain for creating an "unpredictable" value.
The 32 bytes (256 bits) of data give us 2^256, or about 1.1579e+77, possible values, which is a pretty big set for you to use, so I doubt that you would deplete entropy.
However, mt_rand() returns an integer, not a series of bytes. That means that you have only 4 billion or so numbers to choose from. Compared to that other huge number, I would choose the latter.
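A minimal sketch of the kind of token generation I'm describing (the function name is mine; it assumes the openssl extension is loaded):

```php
<?php
// Generate a CSRF token from 32 bytes (256 bits) of cryptographically
// strong pseudo-random data, hex-encoded so it is safe to embed in HTML.
function generateCsrfToken(): string
{
    $bytes = openssl_random_pseudo_bytes(32);
    return bin2hex($bytes); // 64 hex characters
}

$token = generateCsrfToken();
echo $token . "\n";
```

Compare that with seeding a token from uniqid() or mt_rand(): the value space here is so large that guessing is out of the question.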
It uses that as an example for generating a token, but that page also specifically states that it is based off of microtime. Because of that the value would be predictable.
...I should say a *significant* loss in security.
Thanks. That's a good point. In other words, using md5() or sha512 is not as important as getting actually random bits. The hashing itself is really only there to make sure that the bits that come out do not break the format. One could almost say that with openssl_random_pseudo_bytes() as the source you could use md5(), hash_hmac() or base64_encode() without a loss of security, something that could not be said of uniqid().
Sounds like the files were not actually pushed to the container. I would suggest that you contact support on the Get Satisfaction page for PHPCloud. I don't work for Zend anymore and so they may have changed things since I last worked on it.
1 year, 6 months ago on Connecting to the Zend Developer Cloud using NetBeans for PHP
It went pretty well. I have a second one coming up as soon as I can get it scheduled. We'll see how the next one goes.
1 year, 6 months ago on You gotta know when to fold ‘em
Yep. Used FPM
1 year, 6 months ago on Why is FastCGI /w Nginx so much faster than Apache /w mod_php?
... in other words, yes, memory utilization is more efficient with NginX. But one of the claims I heard was that NginX was faster, which turned out not to be true. With memory being cheap these days, memory usage should not be a primary factor for determining the server to use. The arguments _for_ NginX can be made quite easily without having to go to secondary arguments.
I was using Apache 2.2. But let's not go too far here. What I claimed was that PHP was faster on Apache. I had heard several times that the opposite was the case; I wanted to figure out why and found that the assertion was actually wrong. I wasn't talking about memory and I wasn't talking about static files. Nginx is faster on a raw performance test, by at least an order of magnitude, due to its event-based architecture.
Personally, I would still recommend using NginX with FastCGI for PHP even though it is slower than Apache. For a mixed media site, the additional performance of static files more than makes up for the slow-down with FastCGI. And if it's an API-based site (only PHP-based content) NginX will handle transient loads better (as well as denial of service attacks).
I did not. I might see what that does differently. But since both servers react to an accept() *system* call to process the request (and do not manage the handshake themselves) it is unlikely that it will make much of a difference.
@lparthad Means that there was an error in the format of your amf_config.ini file.
1 year, 9 months ago on Flex and Zend Framework – Part 1
@ecolinet I take back what I said about Codiqa. It doesn't seem to support remote data sources.
2 years ago on Phonegap and Bootstrap not lovers?
@ecolinet There was a reason for it, though I don't quite recall what it was. Perhaps it was that native HTML5 transitions were giving me what I wanted. However, I just saw Codiqa and I must say I'm intrigued again. But I won't be implementing it in this app. I am using jQuery for things like managing the DOM, just not the mobile functionality.
@dstockto ... did I seriously??? Van Dammit, is what I meant. Will change
2 years, 1 month ago on Single User OAuth using Zend Framework’s Twitter Service Class
@Vladas Dirzys JSLint actually wasn't giving me an error
2 years, 1 month ago on JSON parsing error in function return
@stm Beats me. I just know that when I moved the start of the JSON declaration to the same line as the return keyword that it worked.