Easy FancyBox Pro: page pollution
This topic has 6 replies, 2 voices, and was last updated 9 years ago by bart.
1 September 2014 at 19:07 #3337
bart (Participant)
Hi,
I’m just wondering without having dug into it myself…
After enabling FancyBox, by default it outputs all its code into every page that gets rendered; at least, I’m getting a lot of JavaScript and also some CSS. Would it be possible to move this into a loaded CSS file or .js file?
The way it is right now is not very pretty, and it also annoys me because I often debug my own HTML pages.
Regards, B.
1 September 2014 at 21:41 #3343
RavanH (Keymaster)
Hi Bart,
The JavaScript and CSS you see in the page source (header) are there because they are dynamically generated depending on plugin options and (in the case of the IE-specific CSS) file paths. Placing this in the page source is actually faster than serving it via a separate (dynamic) request.
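For illustration, a minimal sketch of how a plugin typically prints such option-dependent CSS inline in the head (the function name, option key and CSS rule here are invented for illustration; `add_action`, `get_option`, `esc_attr` and the `wp_head` hook are standard WordPress APIs):

```php
<?php
// Sketch only: print option-dependent CSS inline in the page head.
// The option key and the CSS rule are hypothetical.
function efb_print_inline_styles() {
    $opts  = get_option( 'easy_fancybox_options', array() );
    $color = isset( $opts['overlay_color'] ) ? $opts['overlay_color'] : '#000';
    echo '<style type="text/css">.fancybox-overlay{background:' .
         esc_attr( $color ) . ";}</style>\n";
}

// Hook into the head output when running inside WordPress.
if ( function_exists( 'add_action' ) ) {
    add_action( 'wp_head', 'efb_print_inline_styles' );
}
```

Because the output depends on the stored options, it has to be produced per request (or cached somewhere), which is the trade-off discussed in this thread.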
All the JavaScript and CSS that do not need to be dynamic, like the script libraries and the main stylesheet, are loaded from static files.
In the next release, the Internet Explorer 6, 7 and 8 specific style rules will no longer be included by default. These browsers have become (I’m happy to say) uncommon enough, and dropping them takes away the bulk of the style rules from your page source.
Hope that helps 🙂
2 September 2014 at 01:20 #3342
bart (Participant)
Okay. Indeed, the IE part is the most offending; it would not be hard for me to get rid of that.
Nevertheless, this code is dynamic only with respect to the options, because it gets included in every page, whether that page makes use of Easy FancyBox or not. And if this code is not dynamic in an absolute sense, but rather static as soon as those options are fixed, why not write it out to a file that then gets dynamically linked?
Sure, it will be a bit slower; it would probably mean two extra HTTP requests to the server. But that’s only for the first page; after that, it’s cached.
I really don’t see the point of including “dynamic” content that is not dynamic at all except in that it changes when the options change.
Well, it didn’t take more than 3 minutes to dump the IE part of Easy FancyBox. It took less than that to dump the IE part of my theme :D.
What do I care, I have a seething hatred for IE users :P.
Seriously, what person ever was the first to think that IE was a good idea? I have not used that program for more than 2 hours over the past 10 years.
But anyway, from the viewpoint of my site, this code is NOT dynamic :). It is identical from page to page, so its generation should not happen at page load, but at an earlier point in time. Just my point of view.
2 September 2014 at 17:48 #3341
RavanH (Keymaster)
“why not write these codes out to a file that then gets dynamically linked?”
Because it is more difficult to maintain across different installations/setups: server environment (response headers for revalidation, ETags, write permissions…), WordPress directory locations, single-/multisite installation, and so on. Plus, the caching that you desire will cause many users to come to me complaining that their changed settings “do not work”.
However, I have been considering a sort of halfway approach, similar to what the Custom CSS module in Jetpack uses. It might be implemented in the near future…
2 September 2014 at 19:46 #3340
bart (Participant)
Hmm, I don’t understand all of that, although the write-permissions part is obvious.
I’m not even entirely sure about my own host, but I know cURL can write a cookie file to any location I choose, so it would probably work for me. I do believe determining the WordPress directory locations would never be harder than knowing the location of the installed plugin, since your plugin needs to know where its files are anyway.
Also, you would not write the CSS and JS out to statically named files. You would give these two files a dynamic or unique part in their names, and save the names to the WordPress options table, or something of the kind.
That options table always gets cached by WP, which I discovered much to my dismay when I tried to manually rearrange some category IDs: WP caches a term-hierarchy list that you need to specifically update (or delete), or your changes won’t show. Not exactly the same thing, but performance in reading that data won’t be a problem, and you are probably already reading from that table.
Your plugin then inserts the CSS/JS links it gets from that table, which is why I called it “dynamically linked”. Any browser-caching problem is then mitigated.
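A minimal sketch of that idea (all function, option and file names here are hypothetical; `wp_upload_dir`, `trailingslashit`, `update_option` and `get_option` are real WordPress functions, and the sketch assumes the uploads directory is writable):

```php
<?php
// Sketch: write the generated CSS to a uniquely named static file once,
// remember the name in the options table, and link it from the page.
function efb_write_static_css( $css ) {
    $upload = wp_upload_dir();                      // a writable location
    $name   = 'easy-fancybox-' . substr( md5( $css ), 0, 8 ) . '.css';
    file_put_contents( trailingslashit( $upload['basedir'] ) . $name, $css );
    update_option( 'efb_static_css_file', $name );  // cached by WP with the other options
    return $name;
}

function efb_link_static_css() {
    $upload = wp_upload_dir();
    $name   = get_option( 'efb_static_css_file' );
    if ( $name ) {
        echo '<link rel="stylesheet" href="' .
             trailingslashit( $upload['baseurl'] ) . $name . '" />' . "\n";
    }
}
```

Deriving the file name from a hash of the content means a new options state produces a new file name, so stale browser caches never serve the old rules.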
I guess there could be many WP installations where the only file-update mechanism is FTP; there is a MySQL database with write permissions, and it ends there.
I don’t think multisite would be more complex than it already is. A multisite installation can share the same code while having a different configuration for each site, which would make file-based configuration troublesome. Nevertheless, that only means you would see these semi-randomly named files being stored in that single plugin location. Multisite seems so complex that I never intend to use it myself, but I figure WordPress keeps distinct tables for each site, which means the plugin’s config is automatically distinct from site to site. So basically you only need write permissions to your plugin folder.
I have checked into some of those issues. Writing to your plugin folder is probably a problem: WordPress can write into themes, plugins and uploads, but the plugins themselves are then not group-writable.
However…
If you create a PHP-generated JS/CSS the way Jetpack seems to do…
if ( isset( $_GET['custom-css'] ) ) {
	header( 'Content-Type: text/css', true, 200 );
	header( 'Expires: ' . gmdate( 'D, d M Y H:i:s', time() + 31536000 ) . ' GMT' ); // 1 year
	Jetpack_Custom_CSS::print_css();
	exit;
}
…and if you then parametrize that script with a token that is regenerated on every options save, there will be no stale browser cache for each new token URL. You just store the token with the options. The script that outputs the link into the page knows the token, as does the script that receives the parameter (and generates the CSS and JS); both run at page-generation time (except for the token itself). Random tokens also take care of multisite, if need be. You can then enable browser caching for the script output, and since each of these URLs always produces the same output, response-header management is also very simple. Basically, if a browser asks whether anything changed, you always answer the same:
header( 'HTTP/1.1 304 Not Modified' );
From https://www.mnot.net/cache_docs/:
If a resource (especially a downloadable file) changes, change its name. That way, you can make it expire far in the future, and still guarantee that the correct version is served; the page that links to it is the only one that will need a short expiry time.
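The token part of this proposal could be sketched like this (the `efb-css` query parameter and both function names are invented; `home_url` is a real WordPress function):

```php
<?php
// Sketch: regenerate a short random token on every options save and use
// it as a cache-busting query parameter, so each settings change yields
// a new URL that browsers may cache indefinitely.
function efb_new_token() {
    return substr( md5( uniqid( '', true ) ), 0, 8 );
}

function efb_css_url( $token ) {
    // Hypothetical endpoint: ?efb-css=1&v=<token>
    return home_url( '/?efb-css=1&v=' . rawurlencode( $token ) );
}
```

The token would be stored alongside the plugin options, so the page-rendering code and the CSS/JS endpoint both see the same value.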
The token is really only required for caching and perhaps for multi-site differentiation (but not necessarily so).
I mean, this seems like a perfect solution to me. That way you don’t need to deal with HTTP ETags, which seem overly complex; cache validation and responses are extremely simple.
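Under that scheme the revalidation logic really is trivial, since the content behind a given token URL never changes. A sketch (function names are hypothetical; the headers are standard HTTP):

```php
<?php
// Decide whether the request is a conditional revalidation that can be
// answered with 304, since a token URL's output never changes.
function efb_is_revalidation( $server ) {
    return isset( $server['HTTP_IF_MODIFIED_SINCE'] ) ||
           isset( $server['HTTP_IF_NONE_MATCH'] );
}

// Serve the generated CSS with a far-future Expires header, or answer
// any revalidation request with an unconditional 304 Not Modified.
function efb_serve_css( $css ) {
    if ( efb_is_revalidation( $_SERVER ) ) {
        header( 'HTTP/1.1 304 Not Modified' );
        exit;
    }
    header( 'Content-Type: text/css', true, 200 );
    header( 'Expires: ' . gmdate( 'D, d M Y H:i:s', time() + 31536000 ) . ' GMT' ); // 1 year
    echo $css;
    exit;
}
```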
Then, the CSS/JS generation still happens at run-time; it just gets cached. Almost nothing changes, except that it is sourced through one or two additional HTTP requests that are subsequently cached by the browser. It seems perfect, and it is even extremely simple for me to implement if I want to. It would probably not take me more than an hour, maybe two. If you want, I can even do it for you, so you can just check the result and see if it is any good. It would probably not require more than, say, 20-30 lines of code.
Kudos, B.
6 September 2014 at 02:28 #3339
bart (Participant)
Hey, my apologies. I didn’t mean to start making choices for you :p.
I only wanted to say that I think the Jetpack approach might be a good thing, and that perhaps using a random token would alleviate some or all of the problems you have identified.
And my personal belief and impression is that a really good solution IS actually very much possible.
That was all I would have needed to say really (regular text smiley) :S.
Regards, Bart.
P.S. I don’t seem to be able to edit my post, otherwise I would have tidied it up a bit…
Regards again.
6 September 2014 at 15:43 #3338
RavanH (Keymaster)
No problem Bart, I appreciate your thoughts 🙂
I only wanted to say that I think the Jetpack approach might be a good thing…
I’m not completely convinced yet. The “how / in what way” is not so much the issue as the “why”.
You must consider that the JavaScript that is inserted adds very little rendering time, whereas serving it separately means the server takes extra requests. Many WordPress themes and plugins already add sooooo many extra requests.
So it really boils down to additional source-rendering time versus extra request and response time. Server versus visitor location, and server resources like CPU, memory, maximum concurrent requests, etc. come into play here. Every case is different and there is no one-size-fits-all solution.
Plus, when you install a caching plugin like WP Super Cache or W3 Total Cache (or any of the others), or if you’re using a server-cache mechanism (like Nginx FastCGI Cache), the additional rendering time will no longer count for cached responses.
Plus, on most websites most visitors will only open one page; only some browse to one more page before leaving again. Very rarely will a visitor view more than 3 or 4 pages on the same site, and it is only in the latter case that the advantage of the browser cache comes into play.
… and that perhaps using a random token would alleviate some or all of the problems you have identified.
The problem with query strings is that some proxy servers and server-cache mechanisms will not cache them.
You see all the “buts” there 😉