

The Highest-Quality Mosque Carpets

Mosque carpet prices

Because new-generation mosques built with modern architecture use much larger floor areas, mosque carpets are chosen according to specific measurements, favoring easy-to-clean, easy-to-install carpets that maintain consistent color and order. Thanks to the range of options available, you can fit a mosque's floor with carpeting in the colors and dimensions that suit the building. In Turkey's historic mosques built in the Seljuk and Ottoman styles, wool or acrylic mosque carpets can be chosen to match the building's aesthetic.

Differences in Mosque Carpet Prices

Halı Cenneti is a frequently preferred brand in Turkey's mosque carpet and dormitory carpet sector thanks to its high-quality, long-lasting, durable carpets, which come in beautiful patterns and colors, resist knee marks, shed no lint or dust, and are anti-static, anti-bacterial, and safe for human health. Having drawn considerable interest in the domestic market, Halı Cenneti has also brought its quality, long-lasting, eye-catching mosque carpets, with their wide range of patterns and colors, to markets abroad, where they have met with equally strong demand. Halı Cenneti values customer satisfaction and trust above all else. For this reason, all our carpets are produced from quality materials and made to stay durable for many years, in the knowledge that a carpet is bought once and used for a long time. With its "Endless Patterns and Colors" principle, its beautiful mosque and dormitory carpet designs, and its unlimited color options, it has broken new ground in the mosque carpet and dormitory carpet sectors.

Types of Mosque Carpets

In our mosque carpets, colorfastness after washing is just as important as easy washability. Halı Cenneti, which can produce mosque carpets in any size and dimension you want, puts customer satisfaction at the center of its work. You can wash the carpets in place, or remove them and wash them elsewhere. One of the most important factors to consider when buying a mosque carpet is the payment option. Our easy-to-clean mosque carpets can be delivered and installed anywhere in Turkey. If you have a custom design in mind, we can have it ready for installation within about two weeks. Easy to clean and resistant to deformation, our mosque carpets are produced with special fade-resistant colored yarns. Aware that producing mosque carpets is a specialty of its own, Halı Cenneti also offers considerable flexibility in payment.

Mosque carpet prices in Istanbul

When worshipping in mosques, in line with the practices of our religion and the hadiths of the Prophet, cleanliness must be observed and mosque carpets must be kept clean. The use of mosque carpets has long been a tradition in all of Turkey's mosques and an important part of its culture. For this reason, the carpets our company offers come in the most suitable colors and patterns. As floor coverings, carpets are a necessity in mosques, both adding aesthetic appeal and making worship comfortable.



Craigslist takes down ability to insert HTML tags in listings, starting in Florida

While we’re still investigating how far this extends geographically and categorically, at least in Florida, Texas, Mass. and New York, and at least for rental and real estate classifications, HTML / rich-media features for Craigslist listings were severely curtailed this week, without notice. It seems to have begun on Monday in Southern Florida, spreading to Orlando and through the entire state by Wednesday.

Today, Friday, it seems to have spread elsewhere, so it may well be on its way nationwide and to all categories in which any item is for sale – automotive, real estate, furniture, appliances, and so forth. The Craigslist HTML guidelines would suggest that to be the case, stating that “IMG, FONT, TABLE, DIV, and SPAN tags are no longer supported in the for sale categories. Please use CL image upload for images.”

Those HTML tags let web designers display inset photos, change typographic attributes beyond Craigslist’s own design standards and reformat listings to their own liking.

It might not have been a gradual rollout, but simply one that only tech-savvy and highly-observant folks interested in specific classifications happened to notice in their classifications and locale, while others have yet to pick up on it. We are grateful to Peter Schuh, CEO and founder of real estate and rental service firm ShowMojo, for alerting us to the issue and clarifying what it all meant. ShowMojo offers a service for rental property owners and managers, leasing agents, and real estate agents and brokers, that helps them showcase and manage their listings on rental and real estate sites. Its flagship Scheduler Button is part of the ShowMojo template that uploads to listing ads, allowing prospective renters and buyers to go online at any time of day or night and schedule property showings. It was an integral and effective part of ShowMojo users’ Craigslist advertising – until Craigslist made its HTML change.

Nor do these changes affect only ShowMojo clients. We talked with another Realtor, who uses Listing-To-Leads as his automated template for ad placement. While we were on the phone he attempted to place a Craigslist ad and got the message that the HTML tags were no longer supported for this category.

“No pre-made templates are going to work for anyone,” Schuh told the AIM Group. “The codes don’t work at all. We lost the Scheduler Button and it’s not recoverable.”

The biggest problem is for those who haven’t picked up on the issue and simply are not aware that their ad links aren’t working, their logos aren’t showing, and so forth. Schuh said, however, that ads placed prior to the change seem to be working as before. It’s just when people edit them, renew them or place new ads after the tag changes that the problem arises. We’re still investigating. While we’ve inquired of Craigslist we’ve had no response so far.

ShowMojo actually put out a press release saying they quickly found a solution – not quite as snazzy, but workable. “We discovered how to get a clickable link back on Craigslist,” Schuh said.

The graphics below show a pre-change Craigslist rich-media ad with the Scheduler Button (on the left), followed by the post-change workaround, with links out. Click on each to enlarge for better viewing. Do watch the tongue-in-cheek ShowMojo “Hurricane Craig” video.

It’s a shame that advertisers haven’t all picked up on changes that might be hurting ad response. Of course, Craigslist has a right to enforce its own HTML standards. All it would have taken was a blog post or press release. What happened to the community service Craigslist board members so often espouse?

The audacious plan to end hunger with 3-D printed food


Anjan Contractor’s 3D food printer might evoke visions of the “replicator” popularized in Star Trek, from which Captain Picard was constantly interrupting himself to order tea. And indeed Contractor’s company, Systems & Materials Research Corporation, just got a six-month, $125,000 grant from NASA to create a prototype of his universal food synthesizer.

But Contractor, a mechanical engineer with a background in 3D printing, envisions a much more mundane—and ultimately more important—use for the technology. He sees a day when every kitchen has a 3D printer, and the earth’s 12 billion people feed themselves customized, nutritionally-appropriate meals synthesized one layer at a time, from cartridges of powder and oils they buy at the corner grocery store. Contractor’s vision would mean the end of food waste, because the powder his system will use is shelf-stable for up to 30 years, so that each cartridge, whether it contains sugars, complex carbohydrates, protein or some other basic building block, would be fully exhausted before being returned to the store.

Ubiquitous food synthesizers would also create new ways of producing the basic calories on which we all rely. Since a powder is a powder, the inputs could be anything that contains the right organic molecules. We already know that eating meat is environmentally unsustainable, so why not get all our protein from insects?

If eating something spat out by the same kind of 3D printers that are currently being used to make everything from jet engine parts to fine art doesn’t sound too appetizing, that’s only because you can currently afford the good stuff, says Contractor. That might not be the case once the world’s population reaches its peak size, probably sometime near the end of this century.

“I think, and many economists think, that current food systems can’t supply 12 billion people sufficiently,” says Contractor. “So we eventually have to change our perception of what we see as food.”
There will be pizza on Mars


The ultimate in molecular gastronomy. (Schematic of SMRC’s 3D printer for food.) SMRC

If Contractor’s utopian-dystopian vision of the future of food ever comes to pass, it will be an argument for why space research isn’t a complete waste of money. His initial grant from NASA, under its Small Business Innovation Research program, is for a system that can print food for astronauts on very long space missions. For example, all the way to Mars.

“Long distance space travel requires 15-plus years of shelf life,” says Contractor. “The way we are working on it is, all the carbs, proteins and macro and micro nutrients are in powder form. We take moisture out, and in that form it will last maybe 30 years.”

Pizza is an obvious candidate for 3D printing because it can be printed in distinct layers, so it only requires the print head to extrude one substance at a time. Contractor’s “pizza printer” is still at the conceptual stage, but he will begin building it within two weeks. It works by first “printing” a layer of dough, which is baked at the same time it’s printed, by a heated plate at the bottom of the printer. Then it lays down a tomato base, “which is also stored in a powdered form, and then mixed with water and oil,” says Contractor.

Finally, the pizza is topped with the delicious-sounding “protein layer,” which could come from any source, including animals, milk or plants.


The prototype for Contractor’s pizza printer (captured in a video, above), which helped him earn a grant from NASA, was a simple chocolate printer. It’s not much to look at, nor is it the first of its kind, but at least it’s a proof of concept.

source: http://qz.com/86685/the-audacious-plan-to-end-hunger-with-3-d-printed-food/



Welcome to the One-Screen World


As screens get cheaper and more ubiquitous, are we going to keep counting them?

Not too long ago, I was asked to give a presentation on the state of digital media and how well brands are intersecting the worlds of marketing and technology. Prior to my closing keynote, there was a panel discussion about the state of media. One senior media executive was discussing the power of “a four screen world.” I thought that he had made a mistake. I was familiar with the concept of three screens (television, computer and mobile), but four screens was something new. Eventually, he unveiled that the fourth screen was the tablet.

It’s still somewhat shocking to think that the iPad was first introduced on April 3rd, 2010, and we now live in a world where Apple sells more iPads than any PC manufacturer sells across its entire PC line. This has been a steadily growing trend since 2012. And yet this is the fourth screen?

The basic dilemma for marketers is this: there are now too many screens to count. Set aside PCs, tablets, smartphones, and TVs (connected or otherwise), for a moment. Your car, your thermostat, your washer and dryer, your refrigerator are all on their way to being “smart” as well — connected to the internet and to each other, featuring screens that offer up all sorts of information, from usage data to content, like a fridge that suggests recipes based on the food stored inside.

This means the future is not about three screens or four screens or fourteen screens. It’s about one screen: whichever screen is in front of me. In a world where screens are connected and everywhere, the notion of even counting them seems arbitrary, at best. If you don’t believe me, speak to somebody currently sporting Google Glass.

At the same time that screens are proliferating, they’re also integrating.

My niece is nineteen years old. When she was sixteen, she would come home from school, take out her laptop, plop down on the couch, lift the computer lid, turn on the TV, plug in her iPod earbuds, and set her BlackBerry down next to her. From afar, it looked like she was running NORAD. But fast-forward a mere three years, and now she comes home from school, takes out her iPad… and that’s it.

All of that core content is now readily available on one screen. From content (in text, images, audio, and video) to communications (chatting with friends on Skype or via Google Hangouts), it’s all there on this one device that rules them all.

This convergence is happening because, no matter how many screens you buy, you only have one pair of eyes. Yes, we are seeing a massive uptick in consumers who are using companion devices (meaning, they are watching TV but have their smartphones nearby), and while the industry does refer to it as a companion device, the truth is that you’re not watching the television with one eyeball and tweeting on your iPhone with the other. You’re seeing one screen at a time.

Welcome to the one-screen world.

Here we are, today, with over a billion smartphones in the world. They outnumber PCs. Fifteen percent of online retail sales will take place this year via mobile devices, according to eMarketer, and that’s a 56% increase from 2012. Within the next decade, virtually all mobile phones will be smartphones, meaning six billion people will be constantly connected. We already live in a world where more individuals have a mobile subscription than access to safe drinking water.

And yet, according to a recent survey by Adobe, 45% of marketers say their firms still don’t have a mobile presence. Businesses are still splitting hairs over what is the web, what is the smartphone, what is the tablet, and what is TV. Instead of hunkering down and figuring out what the customer’s new expectations are when everything from their washer and dryer to their television and smartphone are hyper-connected to one another, most marketers are just worrying about how they’re going to advertise on a mobile screen. Advertising? That’s not the revolution here. Now, brands don’t just advertise on someone else’s mobile site, they can build their own apps, tools, and programs of engagement that make mobile a different kind of media. They can create value through offering a mobile service or app that is truly useful. They can touch their consumers in ways that are both contextual and location-aware. This is the proverbial “last mile” that all marketers were hoping for: contextual, personal, and by location.

If ever there was a time to embrace the notion of the one-screen world, this would be it. Increasingly, consumers are rolling these screens up into one. They’re streaming video from their tablets and laptops to their TVs. They’re watching TV shows on their phones. They simply want the content they like on the device they prefer, when they want it.

The rise of mobile gives marketers a tremendous opportunity to rethink what their jobs really are. Don’t send me a coupon or bombard me with ads for the latest washing machine; don’t blast me with a text message while I’m in a department store’s appliance center. Create an app that lets me control my washing machine, so I can start my washing on my way home from the office, so it’s not sitting wet all day in the washer.

Remember, at the end of the day, your customers only have one pair of eyes, and they’re only looking at one screen: the one that interests them.

Binary Matrix Security

The Rise of Everyday Hackers

Veracode released its annual State of Software Security report, a study of software vulnerability trends and predictions of how these flaws could be exploited if left unaddressed.

The research suggests a rise in “everyday hackers,” driven by the wide availability of attack information, which makes it possible for less technically skilled hackers to take advantage of relatively simple vulnerabilities such as SQL injection.
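To make the vulnerability class concrete, here is a minimal, self-contained Python sketch using an in-memory SQLite database; the table, rows, and queries are hypothetical, purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

malicious = "alice' OR '1'='1"

# Vulnerable: attacker-controlled input is pasted into the SQL string,
# so the injected OR clause matches every row in the table.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % malicious
).fetchall()

# Safe: a parameterized query treats the whole input as a single literal
# value, so the injected quote characters have no special meaning.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()

print(vulnerable)  # [('alice',)] — the injection matched rows it shouldn't
print(safe)        # [] — no user is literally named "alice' OR '1'='1"
```

The fix requires no special skill, which is part of the report's point: these are simple, well-understood flaws that persist anyway.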


“Despite significant improvements in awareness of the importance of securing software, we are not seeing the dramatic decreases in exploitable coding flaws that should be expected,” said Chris Eng, vice president of research at Veracode.

The study found that insecure software lies behind most security breaches and data-loss incidents. Approximately 70 percent of the software analyzed failed to comply with enterprise security policies.


“The amount of risk an organization accepts should be a strategic business decision – not the aftermath of a particular development project,” — Chris Wysopal, co-founder and CTO, Veracode.



PostgreSQL 9.2.4, 9.1.9, 9.0.13 and 8.4.17 released

The PostgreSQL Global Development Group has released a security update to all current versions of the PostgreSQL database system, including versions 9.2.4, 9.1.9, 9.0.13, and 8.4.17. This update fixes a high-exposure security vulnerability in versions 9.0 and later. All users of the affected versions are strongly urged to apply the update immediately.

A major security issue fixed in this release, CVE-2013-1899, makes it possible to craft a connection request, containing a database name that begins with “-”, that can damage or destroy files within a server’s data directory. Anyone with access to the port the PostgreSQL server listens on can initiate this request. This issue was discovered by Mitsumasa Kondo and Kyotaro Horiguchi of NTT Open Source Software Center.

Two lesser security fixes are also included in this release: CVE-2013-1900, wherein random numbers generated by contrib/pgcrypto functions may be easy for another database user to guess, and CVE-2013-1901, which mistakenly allows an unprivileged user to run commands that could interfere with in-progress backups. Finally, this release fixes two security issues with the graphical installers for Linux and Mac OS X: insecure passing of superuser passwords to a script (CVE-2013-1903) and the use of predictable filenames in /tmp (CVE-2013-1902). Marko Kreen, Noah Misch and Stefan Kaltenbrunner reported these issues, respectively.

We are grateful for each developer’s efforts in making PostgreSQL more secure.

This release also corrects several errors in management of GiST indexes. After installing this update, it is advisable to REINDEX any GiST indexes that meet one or more of the conditions described below.

This update release also contains fixes for many minor issues discovered and patched by the PostgreSQL community in the last two months, including:

  • Fix GiST indexes to not use “fuzzy” geometric comparisons for box, polygon, circle, and point columns
  • Fix bugs in contrib/btree_gist for GiST indexes on text, bytea, bit, and numeric columns
  • Fix bugs in page splitting code for multi-column GiST indexes
  • Fix buffer leak in WAL replay causing “incorrect local pin count” errors
  • Ensure crash recovery before entering archive recovery during unclean shutdown when recovery.conf is present
  • Avoid deleting not-yet-archived WAL files during crash recovery
  • Fix race condition in DELETE RETURNING
  • Fix possible planner crash after adding columns to a view depending on another view
  • Eliminate memory leaks in PL/Perl’s spi_prepare() function
  • Fix pg_dumpall to handle database names containing “=” correctly
  • Avoid crash in pg_dump when an incorrect connection string is given
  • Ignore invalid indexes in pg_dump and pg_upgrade
  • Include only the current server version’s subdirectory when backing up a tablespace with pg_basebackup
  • Add a server version check in pg_basebackup and pg_receivexlog
  • Fix contrib/dblink to handle inconsistent settings of DateStyle or IntervalStyle safely
  • Fix contrib/pg_trgm’s similarity() function to return zero for trigram-less strings
  • Enable building PostgreSQL with Microsoft Visual Studio 2012
  • Update time zone data files for DST law changes in Chile, Haiti, Morocco, Paraguay, and some Russian areas

As always, update releases only require installation of packages and a database system restart. You do not need to dump/restore or use pg_upgrade for this update release. Users who have skipped multiple update releases may need to perform additional, post-update steps; see the Release Notes for details.

Source: http://www.postgresql.org/about/news/1456/

Developing for Mobiles

Over the last couple of years, we’ve seen mobile development become an increasing part of our work at ThoughtWorks. A common question is how to deal with the many kinds of mobile devices that are out there. Recently I published an infodeck on developing software for multiple mobile devices. This explores the dangers of a naive cross-platform approach, explores the trade-offs between multiple native apps versus a web app, and looks into hybrid approaches.

In a complementary article, Giles Alexander writes about how to allocate effort across different platforms. He outlines two opening gambits: laser-focus, which concentrates on doing a single platform really well, and cover-your-bases, which maximizes the number of platforms to aim at. He talks about the choice between these openings and how to build on them. Giles is also the maintainer of Calatrava – an open-source framework to assist in building hybrid mobile applications.

Source: http://martinfowler.com/


How Google Retooled Android With Help From Your Brain

When Google built the latest version of its Android mobile operating system, the web giant made some big changes to the way the OS interprets your voice commands. It installed a voice recognition system based on what’s called a neural network — a computerized learning system that behaves much like the human brain.

For many users, says Vincent Vanhoucke, a Google research scientist who helped steer the effort, the results were dramatic. “It kind of came as a surprise that we could do so much better by just changing the model,” he says.

Vanhoucke says that the voice error rate with the new version of Android — known as Jelly Bean — is about 25 percent lower than previous versions of the software, and that this is making people more comfortable with voice commands. Today, he says, users tend to use more natural language when speaking to the phone. In other words, they act less like they’re talking to a robot. “It really is changing the way that people behave.”

It’s just one example of the way neural network algorithms are changing the way our technology works — and the way we use it. This field of study had cooled for many years, after spending the 1980s as one of the hottest areas of research, but now it’s back, with Microsoft and IBM joining Google in exploring some very real applications.

When you talk to Android’s voice recognition software, the spectrogram of what you’ve said is chopped up and sent to eight different computers housed in Google’s vast worldwide army of servers. It’s then processed, using the neural network models built by Vanhoucke and his team. Google happens to be very good at breaking up big computing jobs like this and processing them very quickly, and to figure out how to do this, Google turned to Jeff Dean and his team of engineers, a group that’s better known for reinventing the way the modern data center works.

Neural networks give researchers like Vanhoucke a way of analyzing lots and lots of patterns — in Jelly Bean’s case, spectrograms of the spoken word — and then predicting what a brand new pattern might represent. The metaphor springs from biology, where neurons in the body form networks with other cells that allow them to process signals in specialized ways. In the kind of neural network that Jelly Bean uses, Google might build up several models of how language works — one for English language voice search requests, for example — by analyzing vast swaths of real-world data.

“People have believed for a long, long time — partly based on what you see in the brain — that to get a good perceptual system you use multiple layers of features,” says Geoffrey Hinton, a computer science professor at the University of Toronto. “But the question is how can you learn these efficiently.”

Android takes a picture of the voice command and Google processes it using its neural network model to figure out what’s being said.

Google’s software first tries to pick out the individual parts of speech — the different types of vowels and consonants that make up words. That’s one layer of the neural network. Then it uses that information to build more sophisticated guesses; each layer of these connections drives it closer to figuring out what’s being said.

Neural network algorithms can be used to analyze images too. “What you want to do is find little pieces of structure in the pixels, like, for example, an edge in the image,” says Hinton. “You might have a layer of feature-detectors that detect things like little edges. And then once you’ve done that you have another layer of feature detectors that detect little combinations of edges like maybe corners. And once you’ve done that, you have another layer and so on.”
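The layered scheme Hinton describes can be sketched in a few lines of plain Python. Each layer computes weighted sums of the previous layer's outputs and passes them through a nonlinearity, so later layers respond to combinations of the simpler features detected earlier. The weights below are hand-set for illustration, not learned:

```python
import math

def layer(inputs, weights, biases):
    """One network layer: weighted sums followed by a sigmoid nonlinearity."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1.0 / (1.0 + math.exp(-z)))  # sigmoid
    return outputs

# Hypothetical hand-set weights: the first layer reacts to simple
# contrasts ("edges") in a 4-pixel input; the second layer fires only
# when both edges are present, i.e. a combination of lower-level features.
layer1_w = [[1.0, -1.0, 0.0, 0.0],   # edge between pixels 0 and 1
            [0.0, 0.0, 1.0, -1.0]]   # edge between pixels 2 and 3
layer1_b = [0.0, 0.0]
layer2_w = [[2.0, 2.0]]
layer2_b = [-3.0]

pixels = [1.0, 0.0, 1.0, 0.0]        # a tiny "image" containing two edges
hidden = layer(pixels, layer1_w, layer1_b)
score = layer(hidden, layer2_w, layer2_b)[0]
print(score)  # a value between 0 and 1; higher when both edges are present
```

A uniform input like [1.0, 1.0, 1.0, 1.0] produces a lower score, because the first layer's edge detectors stay silent and the second layer has nothing to combine. Real systems like Jelly Bean's learn these weights from data rather than setting them by hand.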

Neural networks promised to do something like this back in the 1980s, but getting things to actually work at the multiple levels of analysis that Hinton describes was difficult.

But in 2006, there were two big changes. First, Hinton and his team figured out a better way to map out deep neural networks — networks that make many different layers of connections. Second, low-cost graphical processing units came along, giving academics a much cheaper and faster way to do the billions of calculations they needed. “It made a huge difference because it suddenly made things go 30 times as fast,” says Hinton.

Today, neural network algorithms are starting to creep into voice recognition and imaging software, but Hinton sees them being used anywhere someone needs to make a prediction. In November, a University of Toronto team used neural networks to predict how drug molecules might behave in the real world.

Jeff Dean says that Google is now using neural network algorithms in a variety of products — some experimental, some not — but nothing is as far along as the Jelly Bean speech recognition software. “There are obvious tie-ins for image search,” he says. “You’d like to be able to use the pixels of the image and then identify what object that is.” Google Street View could use neural network algorithms to tell the difference between different kinds of objects it photographs — a house and a license plate, for example.

And lest you think this may not matter to regular people, take note. Last year Google researchers, including Dean, built a neural network program that taught itself to identify cats on YouTube.

Microsoft and IBM are studying neural networks too. In October, Microsoft Chief Research Officer Rick Rashid showed a live demonstration of Microsoft’s neural network-based voice processing software in Tianjin, China. In the demo, Rashid spoke in English and paused after each phrase. To the audience’s delight, Microsoft’s software simultaneously translated what he was saying and then spoke it back to the audience in Chinese. The software even adjusted its intonation to make itself sound like Rashid’s voice.

“There’s much work to be done in this area,” he said. “But this technology is very promising, and we hope in a few years that we’ll be able to break down the language barriers between people. Personally, I think this is going to lead to a better world.”


Meet Wikipedia, the Encyclopedia Anyone Can Code

It began as the encyclopedia anyone can edit. And now it’s also the encyclopedia anyone can program.

As of this weekend, anyone on Earth can use Lua — a 20-year-old programming language already championed by the likes of Angry Birds and World of Warcraft — to build material on Wikipedia and its many sister sites, such as Wikiquote and Wiktionary. Wikipedia has long offered simple tools that let tens of thousands of volunteer editors reuse little bits of text across its encyclopedia pages, but this is something different.

“We wanted to provide editors with a real programming language,” says Rob Lanphier, the director of platform engineering at the Wikimedia Foundation, the not-for-profit that oversees the online encyclopedia. “This will make things easier for editors, but it will also be significantly faster.”

It’s yet another way that the art of programming is slowly trickling down from the elite technicians of the world to the Average Joe. Companies such as Codecademy are actively looking to teach all sorts of programming skills to everyone and their brother. Google, MIT, and others are building new languages that significantly simplify how software code is built. And the web makes it so easy to put the appropriate tools in your hand. Wikipedia — the most successful crowd-sourced site on the net — is the extreme example.

According to the Wikimedia Foundation, over 84,000 people edit Wikipedia or its sister sites at least five times a month. Not all of them are coders, and certainly not all of them know Lua. But the new tools will turn them into Lua coders — or at least some of them.

“We’re not evangelical about turning everyone into a coder,” says Lanphier. “But it certainly would make our lives easier if they were.”

Indeed, Lanphier and Wikipedia embraced Lua because their old tools were slowing things down. Previously, editors used things called templates to reuse material on multiple pages across the site. The information box that shows up on the right-hand side of George Peppard’s biography? That’s based on a template. So too are the little “citation needed” tags that annotate so many Wikipedia articles. These did the job, but as they piled up — and editors used them to do things they weren’t designed to do — they put a serious drag on the editing process.

If you were editing a page like the one on Hawaii congressional representative Tulsi Gabbard, Wikimedia says, you would need a good 30 seconds to redraw it and reload it. “Templates became more and more complicated over the years,” Lanphier explains. “The template language evolved into something like a programming language, but it was never designed to be a programming language.”

So, the Foundation moved to Lua, a language created in 1993 by a group of computer science professors in Brazil. Lua is a scripting language, meaning it’s relatively easy to use and it’s specifically designed to automate the execution of oft-repeated tasks. It’s widely used in the online gaming community. The massively multiplayer game World of Warcraft, for instance, lets you customize its interface with Lua.

Wikimedia chose Lua because it’s specifically designed for embedding code amidst other things and because it lets site administrators carefully control how that code is executed. The code runs in a sandbox — meaning it’s designed not to interfere with the stuff around it — and it provides detailed controls for limiting how much computing power it can use. “We’re able to constrain things such that we don’t have to worry about an author accidentally — or on purpose — changing an article in such a way that it brings down our servers. We can limit how much CPU time any one given script can use.”

Why not use JavaScript, the web’s standard scripting language? Lanphier says that Lua’s CPU and memory controls will do a better job at keeping Wikipedia’s servers from becoming overloaded. “That’s Lua’s bread and butter,” he says. Certainly, Lua isn’t nearly as popular as JavaScript, but many of the same concepts apply. And as Lanphier explains, anyone can teach themselves to program in Lua simply by looking at sample code embedded in an existing article.
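Wikipedia's actual limits live inside Scribunto's Lua sandbox, but the execution-budget idea itself is easy to imitate. The sketch below (an analogy in Python, not Wikipedia's code) uses sys.settrace to count executed lines and abort any script that exceeds its allowance, much as a Lua host can hook the interpreter every N instructions:

```python
import sys

class BudgetExceeded(Exception):
    pass

def run_with_budget(func, max_lines):
    """Run func(), aborting once more than max_lines lines have executed."""
    count = 0
    def tracer(frame, event, arg):
        nonlocal count
        if event == "line":
            count += 1
            if count > max_lines:
                raise BudgetExceeded("script used up its execution budget")
        return tracer  # keep tracing nested frames too
    sys.settrace(tracer)
    try:
        return func()
    finally:
        sys.settrace(None)  # always restore normal execution

def well_behaved():
    return sum(range(10))

def runaway():
    while True:   # would spin forever without a budget
        pass

print(run_with_budget(well_behaved, 1000))   # completes normally, prints 45
try:
    run_with_budget(runaway, 1000)
except BudgetExceeded as e:
    print("stopped:", e)                     # the runaway script is cut off
```

The well-behaved script finishes untouched, while the infinite loop is stopped mid-run without taking the host process down with it; that containment, applied to CPU and memory rather than line counts, is what Lanphier is describing.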

Wikipedia doesn’t just provide the programming tools. In a way, it also shows you how to use them.