Most voted algorithm questions
Read all the top-voted questions and answers on a single page.
#1: What is a plain English explanation of "Big O" notation? (Score: 5096)
Created: 2009-01-28 Last updated: 2016-07-22
Tags: algorithm, complexity-theory, computer-science, big-o, time-complexity
I’d prefer as little formal definition as possible and simple mathematics.
#1 Best answer 1 of What is a plain English explanation of "Big O" notation? (Score: 6813)
Created: 2009-01-28 Last updated: 2020-12-28
Quick note: this is almost certainly confusing Big O notation (which is an upper bound) with Theta notation "Θ" (which is a two-sided bound). In my experience, this is actually typical of discussions in non-academic settings. Apologies for any confusion caused.
Big-O complexity is often visualized with a graph of common growth rates (the original answer includes such a chart).
The simplest definition I can give for Big-O notation is this:
Big-O notation is a relative representation of the complexity of an algorithm.
There are some important and deliberately chosen words in that sentence:
 relative: you can only compare apples to apples. You can’t compare an algorithm that does arithmetic multiplication to an algorithm that sorts a list of integers. But a comparison of two algorithms to do arithmetic operations (one multiplication, one addition) will tell you something meaningful;
 representation: Big-O (in its simplest form) reduces the comparison between algorithms to a single variable. That variable is chosen based on observations or assumptions. For example, sorting algorithms are typically compared based on comparison operations (comparing two nodes to determine their relative ordering). This assumes that comparison is expensive. But what if the comparison is cheap but swapping is expensive? It changes the comparison; and
 complexity: if it takes me one second to sort 10,000 elements, how long will it take me to sort one million? Complexity in this instance is a relative measure to something else.
Come back and reread the above when you’ve read the rest.
The best example of Big-O I can think of is doing arithmetic. Take two numbers (123456 and 789012). The basic arithmetic operations we learned in school were:
 addition;
 subtraction;
 multiplication; and
 division.
Each of these is an operation or a problem. A method of solving these is called an algorithm.
The addition is the simplest. You line the numbers up (to the right) and add the digits in each column, writing the last digit of that sum in the result. The 'tens' part of that number is carried over to the next column.
Let’s assume that the addition of these numbers is the most expensive operation in this algorithm. It stands to reason that to add these two numbers together we have to add together 6 digits (and possibly carry a 7th). If we add two 100 digit numbers together we have to do 100 additions. If we add two 10,000 digit numbers we have to do 10,000 additions.
See the pattern? The complexity (being the number of operations) is directly proportional to the number of digits n in the larger number. We call this O(n) or linear complexity.
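The column-by-column process can be sketched in code (a hypothetical JavaScript illustration, not from the answer; the function name is invented), counting one operation per digit column:

```javascript
// Schoolbook addition of two decimal strings, counting one "operation"
// per digit column processed. The operation count grows linearly with
// the number of digits: O(n).
function addWithCount(a, b) {
  let i = a.length - 1, j = b.length - 1;
  let carry = 0, ops = 0, result = '';
  while (i >= 0 || j >= 0 || carry) {
    const sum = (i >= 0 ? +a[i--] : 0) + (j >= 0 ? +b[j--] : 0) + carry;
    result = (sum % 10) + result;   // write the last digit of the column sum
    carry = Math.floor(sum / 10);   // carry the 'tens' part to the next column
    ops++;
  }
  return { result, ops };
}
```

Adding the two 6-digit numbers from the text takes 6 column additions; two 100-digit numbers would take 100.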
Subtraction is similar (except you may need to borrow instead of carry).
Multiplication is different. You line the numbers up, take the first digit in the bottom number and multiply it in turn against each digit in the top number, and so on through each digit. So to multiply our two 6-digit numbers we must do 36 multiplications. We may need to do as many as 10 or 11 column adds to get the end result too.
If we have two 100-digit numbers we need to do 10,000 multiplications and 200 adds. For two one-million-digit numbers we need to do one trillion (10^{12}) multiplications and two million adds.
As the algorithm scales with n-squared, this is O(n^{2}) or quadratic complexity. This is a good time to introduce another important concept:
We only care about the most significant portion of complexity.
The astute may have realized that we could express the number of operations as: n^{2} + 2n. But as you saw from our example with two numbers of a million digits apiece, the second term (2n) becomes insignificant (accounting for 0.0002% of the total operations by that stage).
One can notice that we've assumed the worst-case scenario here. While multiplying 6-digit numbers, if one of them has 4 digits and the other has 6 digits, then we only have 24 multiplications. Still, we calculate the worst-case scenario for that 'n', i.e. when both are 6-digit numbers. Hence Big-O notation is about the worst-case scenario of an algorithm.
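A quick numeric check of that claim, using the cost model above (n² digit multiplications plus roughly 2n additions; the helper is invented for illustration):

```javascript
// Total operations for multiplying two n-digit numbers under the model
// above: n*n multiplications plus about 2n additions. The 2n term's
// share of the total shrinks as n grows, which is why Big-O drops it.
function multiplicationCosts(n) {
  const total = n * n + 2 * n;
  return { total, lowOrderShare: (2 * n) / total };
}
```

For n = 1,000,000 the additions account for roughly 0.0002% of the work, matching the figure quoted for the million-digit case.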
The Telephone Book
The next best example I can think of is the telephone book, normally called the White Pages or similar but it varies from country to country. But I’m talking about the one that lists people by surname and then initials or first name, possibly address and then telephone numbers.
Now if you were instructing a computer to look up the phone number for “John Smith” in a telephone book that contains 1,000,000 names, what would you do? Ignoring the fact that you could guess how far in the S’s started (let’s assume you can’t), what would you do?
A typical implementation might be to open up to the middle, take the 500,000th name and compare it to "Smith". If it happens to be "Smith, John", we just got really lucky. Far more likely is that "John Smith" will be before or after that name. If it's after, we then divide the last half of the phone book in half and repeat. If it's before, then we divide the first half of the phone book in half and repeat. And so on.
This is called a binary search and is used every day in programming whether you realize it or not.
So if you want to find a name in a phone book of a million names you can actually find any name by doing this at most 20 times. In comparing search algorithms we decide that this comparison is our ‘n’.
 For a phone book of 3 names it takes 2 comparisons (at most).
 For 7 it takes at most 3.
 For 15 it takes 4.
 …
 For 1,000,000 it takes 20.
That is staggeringly good, isn’t it?
In Big-O terms this is O(log n) or logarithmic complexity. Now the logarithm in question could be ln (base e), log_{10}, log_{2} or some other base. It doesn't matter: it's still O(log n), just like O(2n^{2}) and O(100n^{2}) are still both O(n^{2}).
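The halving procedure described above is a plain binary search; a minimal sketch (illustrative, not the answerer's code) that also counts comparisons:

```javascript
// Binary search over a sorted array, counting one comparison per halving
// step. For 1,000,000 entries this needs at most about 20 steps: O(log n).
function binarySearch(sorted, target) {
  let lo = 0, hi = sorted.length - 1, comparisons = 0;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    comparisons++;
    if (sorted[mid] === target) return { index: mid, comparisons };
    if (sorted[mid] < target) lo = mid + 1; // search the upper half
    else hi = mid - 1;                      // search the lower half
  }
  return { index: -1, comparisons };
}
```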
It’s worthwhile at this point to explain that Big O can be used to determine three cases with an algorithm:
 Best Case: In the telephone book search, the best case is that we find the name in one comparison. This is O(1) or constant complexity;
 Expected Case: As discussed above this is O(log n); and
 Worst Case: This is also O(log n).
Normally we don’t care about the best case. We’re interested in the expected and worst case. Sometimes one or the other of these will be more important.
Back to the telephone book.
What if you have a phone number and want to find a name? The police have a reverse phone book but such lookups are denied to the general public. Or are they? Technically you can reverse lookup a number in an ordinary phone book. How?
You start at the first name and compare the number. If it’s a match, great, if not, you move on to the next. You have to do it this way because the phone book is unordered (by phone number anyway).
So to find a name given the phone number (reverse lookup):
 Best Case: O(1);
 Expected Case: O(n) (for 500,000); and
 Worst Case: O(n) (for 1,000,000).
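That reverse lookup is just a linear scan; a sketch (the { name, number } record shape is invented for illustration):

```javascript
// Linear scan through a book unordered by phone number: O(1) if the
// first entry matches, O(n) if the match is last or absent.
function reverseLookup(entries, number) {
  for (let i = 0; i < entries.length; i++) {
    if (entries[i].number === number) return entries[i].name;
  }
  return null; // scanned all n entries without a match
}
```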
The Traveling Salesman
This is quite a famous problem in computer science and deserves a mention. In this problem, you have N towns. Each of those towns is linked to 1 or more other towns by a road of a certain distance. The Traveling Salesman problem is to find the shortest tour that visits every town.
Sounds simple? Think again.
If you have 3 towns A, B, and C with roads between all pairs then you could go:
 A → B → C
 A → C → B
 B → C → A
 B → A → C
 C → A → B
 C → B → A
Well, actually there are fewer than that because some of these are equivalent (A → B → C and C → B → A are equivalent, for example, because they use the same roads, just in reverse).
In actuality, there are 3 possibilities.
 Take this to 4 towns and you have (iirc) 12 possibilities.
 With 5 it’s 60.
 6 becomes 360.
This is a function of a mathematical operation called a factorial. Basically:
 5! = 5 × 4 × 3 × 2 × 1 = 120
 6! = 6 × 5 × 4 × 3 × 2 × 1 = 720
 7! = 7 × 6 × 5 × 4 × 3 × 2 × 1 = 5040
 …
 25! = 25 × 24 × … × 2 × 1 = 15,511,210,043,330,985,984,000,000
 …
 50! = 50 × 49 × … × 2 × 1 = 3.04140932 × 10^{64}
So the Big-O of the Traveling Salesman problem is O(n!) or factorial or combinatorial complexity.
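The tour counts listed earlier follow n!/2 (each ordering and its reversal use the same roads); a small sketch that reproduces them:

```javascript
// n! grows explosively; the number of distinct routes through n towns
// (treating a route and its reversal as the same) is n!/2, which still
// grows factorially.
function factorial(n) {
  let f = 1;
  for (let i = 2; i <= n; i++) f *= i;
  return f;
}

function distinctRoutes(n) {
  return factorial(n) / 2; // A -> B -> C and C -> B -> A use the same roads
}
```

distinctRoutes gives 3 for 3 towns, 12 for 4, 60 for 5, and 360 for 6, matching the list above.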
By the time you get to 200 towns there isn’t enough time left in the universe to solve the problem with traditional computers.
Something to think about.
Polynomial Time
Another point I wanted to make a quick mention of is that any algorithm that has a complexity of O(n^{a}) is said to have polynomial complexity or is solvable in polynomial time.
O(n), O(n^{2}) etc. are all polynomial time. Some problems cannot be solved in polynomial time. Certain things are used in the world because of this. Public Key Cryptography is a prime example. It is computationally hard to find two prime factors of a very large number. If it wasn’t, we couldn’t use the public key systems we use.
Anyway, that’s it for my (hopefully plain English) explanation of Big O (revised).
#1 Best answer 2 of What is a plain English explanation of "Big O" notation? (Score: 752)
Created: 2009-01-28 Last updated: 2020-10-11
It shows how an algorithm scales based on input size.
O(n^{2}): known as Quadratic complexity
 1 item: 1 operation
 10 items: 100 operations
 100 items: 10,000 operations
Notice that the number of items increases by a factor of 10, but the time increases by a factor of 10^{2}. Basically, n=10 and so O(n^{2}) gives us the scaling factor n^{2} which is 10^{2}.
O(n): known as Linear complexity
 1 item: 1 second
 10 items: 10 seconds
 100 items: 100 seconds
This time the number of items increases by a factor of 10, and so does the time. n=10 and so O(n)’s scaling factor is 10.
O(1): known as Constant complexity
 1 item: 1 operation
 10 items: 1 operation
 100 items: 1 operation
The number of items is still increasing by a factor of 10, but the scaling factor of O(1) is always 1.
O(log n): known as Logarithmic complexity
 1 item: 1 operation
 10 items: 2 operations
 100 items: 3 operations
 1000 items: 4 operations
 10,000 items: 5 operations
The number of computations is only increased by a log of the input value. So in this case, assuming each computation takes 1 second, the log of the input n is the time required, hence log n.
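One way to see where those counts come from (a sketch, not from the answer): if each step shrinks the remaining input by a factor of 10, the number of steps grows as log₁₀(n) plus one, which is O(log n):

```javascript
// Count how many times n can be cut by a factor of 10 before one item
// remains; this reproduces the operation counts in the list above.
function stepsBase10(n) {
  let steps = 1;
  while (n >= 10) {
    n = Math.floor(n / 10);
    steps++;
  }
  return steps;
}
```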
That’s the gist of it. They reduce the maths down so it might not be exactly n^{2} or whatever they say it is, but that’ll be the dominating factor in the scaling.
See also original question in stackoverflow
#2: How do I check if an array includes a value in JavaScript? (Score: 4346)
Created: 2008-10-25 Last updated: 2020-12-13
Tags: javascript, arrays, algorithm, time-complexity, javascript-objects
What is the most concise and efficient way to find out if a JavaScript array contains a value?
This is the only way I know to do it:
function contains(a, obj) {
    for (var i = 0; i < a.length; i++) {
        if (a[i] === obj) {
            return true;
        }
    }
    return false;
}
Is there a better and more concise way to accomplish this?
#2 Best answer 1 of How do I check if an array includes a value in JavaScript? (Score: 4784)
Created: 2009-09-24 Last updated: 2020-11-03
Modern browsers have Array#includes, which does exactly that and is widely supported by everyone except IE:
console.log(['joe', 'jane', 'mary'].includes('jane')); // true
You can also use Array#indexOf, which is less direct, but doesn't require polyfills for outdated browsers.
console.log(['joe', 'jane', 'mary'].indexOf('jane') >= 0); // true
Many frameworks also offer similar methods:
 jQuery: $.inArray(value, array, [fromIndex])
 Underscore.js: _.contains(array, value) (also aliased as _.include and _.includes)
 Dojo Toolkit: dojo.indexOf(array, value, [fromIndex, findLast])
 Prototype: array.indexOf(value)
 MooTools: array.indexOf(value)
 MochiKit: findValue(array, value)
 MS Ajax: array.indexOf(value)
 Ext: Ext.Array.contains(array, value)
 Lodash: _.includes(array, value, [from]) (was _.contains prior to 4.0.0)
 Ramda: R.includes(value, array)
Notice that some frameworks implement this as a function, while others add the function to the array prototype.
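One behavioral difference between the two built-ins is worth knowing (easy to verify in any modern engine): includes uses the SameValueZero comparison, so it can find NaN, while indexOf uses strict equality and cannot:

```javascript
// Array#includes finds NaN; Array#indexOf does not, because NaN !== NaN.
console.log([1, NaN, 3].includes(NaN));      // true
console.log([1, NaN, 3].indexOf(NaN) >= 0);  // false
```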
#2 Best answer 2 of How do I check if an array includes a value in JavaScript? (Score: 467)
Created: 2008-10-25 Last updated: 2019-08-07
Update from 2019: This answer is from 2008 (11 years old!) and is not relevant for modern JS usage. The promised performance improvement was based on a benchmark done in browsers of that time. It might not be relevant to modern JS execution contexts. If you need an easy solution, look for other answers. If you need the best performance, benchmark for yourself in the relevant execution environments.
As others have said, the iteration through the array is probably the best way, but it has been proven that a decreasing while loop is the fastest way to iterate in JavaScript. So you may want to rewrite your code as follows:
function contains(a, obj) {
    var i = a.length;
    while (i--) {
        if (a[i] === obj) {
            return true;
        }
    }
    return false;
}
Of course, you may as well extend Array prototype:
Array.prototype.contains = function(obj) {
    var i = this.length;
    while (i--) {
        if (this[i] === obj) {
            return true;
        }
    }
    return false;
}
And now you can simply use the following:
alert([1, 2, 3].contains(2)); // => true
alert([1, 2, 3].contains('2')); // => false
See also original question in stackoverflow
#3: How can I pair socks from a pile efficiently? (Score: 4004)
Created: 2013-01-19 Last updated: 2020-04-23
Tags: algorithm, sorting, language-agnostic, matching
Yesterday I was pairing the socks from the clean laundry and figured out that the way I was doing it is not very efficient. I was doing a naive search — picking one sock and "iterating" the pile in order to find its pair. This requires iterating over n/2 * n/4 = n^{2}/8 socks on average.
As a computer scientist, I was thinking about what I could do. Sorting (according to size/color/…) of course came to mind to achieve an O(N log N) solution.
Hashing or other not-in-place solutions are not an option, because I am not able to duplicate my socks (though it could be nice if I could).
So, the question is basically:
Given a pile of n pairs of socks, containing 2n elements (assume each sock has exactly one matching pair), what is the best way to pair them up efficiently with up to logarithmic extra space? (I believe I can remember that amount of info if needed.)
I will appreciate an answer that addresses the following aspects:
 A general theoretical solution for a huge number of socks.
 The actual number of socks is not that large, I don’t believe my spouse and I have more than 30 pairs. (And it is fairly easy to distinguish between my socks and hers; can this be used as well?)
 Is it equivalent to the element distinctness problem?
#3 Best answer 1 of How can I pair socks from a pile efficiently? (Score: 2496)
Created: 2013-01-19 Last updated: 2017-12-24
Sorting solutions have been proposed, but sorting is a little too much: We don’t need order; we just need equality groups.
So hashing would be enough (and faster).
 For each color of socks, form a pile. Iterate over all socks in your input basket and distribute them onto the color piles.
 Iterate over each pile and distribute it by some other metric (e.g. pattern) into the second set of piles
 Recursively apply this scheme until you have distributed all socks onto very small piles that you can visually process immediately
This kind of recursive hash partitioning is actually being done by SQL Server when it needs to hash join or hash aggregate over huge data sets. It distributes its build input stream into many partitions which are independent. This scheme scales to arbitrary amounts of data and multiple CPUs linearly.
You don’t need recursive partitioning if you can find a distribution key (hash key) that provides enough buckets that each bucket is small enough to be processed very quickly. Unfortunately, I don’t think socks have such a property.
If each sock had an integer called "PairID", one could easily distribute them into 10 buckets according to PairID % 10 (the last digit).
The best real-world partitioning I can think of is creating a rectangle of piles: one dimension is color, the other is the pattern. Why a rectangle? Because we need O(1) random access to piles. (A 3D cuboid would also work, but that is not very practical.)
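The single-step "perfect hash" idea can be sketched with a Map keyed on all attributes (the sock fields are invented for illustration; note this uses O(n) extra space, more than the question's logarithmic budget):

```javascript
// One-pass pairing: hash each sock by its full attribute set; when a
// second sock hashes to the same key, emit the pair. O(n) time.
function pairSocks(socks) {
  const piles = new Map();
  const pairs = [];
  for (const sock of socks) {
    const key = `${sock.color}|${sock.pattern}|${sock.length}`;
    if (piles.has(key)) {
      pairs.push([piles.get(key), sock]); // found its mate
      piles.delete(key);
    } else {
      piles.set(key, sock); // wait for the matching sock
    }
  }
  return pairs;
}
```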
Update:
What about parallelism? Can multiple humans match the socks faster?
 The simplest parallelization strategy is to have multiple workers take from the input basket and put the socks onto the piles. This only scales up so much: imagine 100 people fighting over 10 piles. The synchronization costs (manifesting themselves as hand collisions and human communication) destroy efficiency and speedup (see the Universal Scalability Law!). Is this prone to deadlocks? No, because each worker only needs to access one pile at a time. With just one "lock" there cannot be a deadlock. Livelocks might be possible depending on how the humans coordinate access to piles. They might just use random backoff, as network cards do at the physical level to determine which card can exclusively access the network wire. If it works for NICs, it should work for humans as well.
 It scales nearly indefinitely if each worker has its own set of piles. Workers can then take big chunks of socks from the input basket (very little contention as they are doing it rarely) and they do not need to synchronise when distributing the socks at all (because they have thread-local piles). At the end, all workers need to union their pile sets. I believe that can be done in O(log (worker count * piles per worker)) if the workers form an aggregation tree.
What about the element distinctness problem? As the article states, the element distinctness problem can be solved in O(N). This is the same for the socks problem (also O(N), if you need only one distribution step; I proposed multiple steps only because humans are bad at calculations. One step is enough if you distribute on md5(color, length, pattern, ...), i.e. a perfect hash of all attributes).
Clearly, one cannot go faster than O(N), so we have reached the optimal lower bound.
Although the outputs are not exactly the same (in one case, just a boolean; in the other case, the pairs of socks), the asymptotic complexities are the same.
#3 Best answer 2 of How can I pair socks from a pile efficiently? (Score: 595)
Created: 2013-01-20 Last updated: 2015-05-14
As the architecture of the human brain is completely different from that of a modern CPU, this question makes no practical sense.
Humans can win over CPU algorithms using the fact that “finding a matching pair” can be one operation for a set that isn’t too big.
My algorithm:
spread_all_socks_on_flat_surface();
while (socks_left_on_a_surface()) {
    // Thanks to human visual SIMD, this is one quick operation.
    pair = notice_any_matching_pair();
    remove_socks_pair_from_surface(pair);
}
At least this is what I am using in real life, and I find it very efficient. The downside is it requires a flat surface, but it’s usually abundant.
See also original question in stackoverflow
#4: What does O(log n) mean exactly? (Score: 2311)
Created: 2010-02-21 Last updated: 2020-07-28
Tags: algorithm, time-complexity, big-o
I am learning about Big O notation running times and amortized times. I understand the notion of O(n) linear time, meaning that the size of the input affects the growth of the algorithm proportionally… and the same goes for, for example, quadratic time O(n^{2}), etc. Even algorithms, such as permutation generators, with O(n!) times, grow by factorials.
For example, the following function is O(n) because the algorithm grows in proportion to its input n:
f(int n) {
    int i;
    for (i = 0; i < n; ++i)
        printf("%d", i);
}
Similarly, if there was a nested loop, the time would be O(n^{2}).
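For instance, a doubly nested loop over n runs its body n × n times (a minimal JavaScript sketch of the O(n^{2}) case):

```javascript
// The inner statement executes n * n times: O(n^2).
function countNested(n) {
  let ops = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      ops++;
    }
  }
  return ops;
}
```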
But what exactly is O(log n)? For example, what does it mean to say that the height of a complete binary tree is O(log n)?
I do know (maybe not in great detail) what a logarithm is, in the sense that log_{10} 100 = 2, but I cannot understand how to identify a function with a logarithmic time.
#4 Best answer 1 of What does O(log n) mean exactly? (Score: 2919)
Created: 2010-02-21 Last updated: 2020-01-01
I cannot understand how to identify a function with a log time.
The most common attributes of a logarithmic running-time function are that:
 the choice of the next element on which to perform some action is one of several possibilities, and
 only one will need to be chosen.
or
 the elements on which the action is performed are digits of n
This is why, for example, looking up people in a phone book is O(log n). You don't need to check every person in the phone book to find the right one; instead, you can simply divide-and-conquer by looking based on where their name is alphabetically, and you only need to explore a subset of each section before you eventually find someone's phone number.
Of course, a bigger phone book will still take you a longer time, but it won’t grow as quickly as the proportional increase in the additional size.
We can expand the phone book example to compare other kinds of operations and their running time. We will assume our phone book has businesses (the “Yellow Pages”) which have unique names and people (the “White Pages”) which may not have unique names. A phone number is assigned to at most one person or business. We will also assume that it takes constant time to flip to a specific page.
Here are the running times of some operations we might perform on the phone book, from fastest to slowest:
 O(1) (in the worst case): Given the page that a business's name is on and the business name, find the phone number.
 O(1) (in the average case): Given the page that a person's name is on and their name, find the phone number.
 O(log n): Given a person's name, find the phone number by picking a random point about halfway through the part of the book you haven't searched yet, then checking to see whether the person's name is at that point. Then repeat the process about halfway through the part of the book where the person's name lies. (This is a binary search for a person's name.)
 O(n): Find all people whose phone numbers contain the digit "5".
 O(n): Given a phone number, find the person or business with that number.
 O(n log n): There was a mix-up at the printer's office, and our phone book had all its pages inserted in a random order. Fix the ordering so that it's correct by looking at the first name on each page and then putting that page in the appropriate spot in a new, empty phone book.
For the below examples, we're now at the printer's office. Phone books are waiting to be mailed to each resident or business, and there's a sticker on each phone book identifying where it should be mailed to. Every person or business gets one phone book.
 O(n log n): We want to personalize the phone book, so we're going to find each person or business's name in their designated copy, then circle their name in the book and write a short thank-you note for their patronage.
 O(n^{2}): A mistake occurred at the office, and every entry in each of the phone books has an extra "0" at the end of the phone number. Take some white-out and remove each zero.
 O(n · n!): We're ready to load the phone books onto the shipping dock. Unfortunately, the robot that was supposed to load the books has gone haywire: it's putting the books onto the truck in a random order! Even worse, it loads all the books onto the truck, then checks to see if they're in the right order, and if not, it unloads them and starts over. (This is the dreaded bogosort.)
 O(n^{n}): You fix the robot so that it's loading things correctly. The next day, one of your coworkers plays a prank on you and wires the loading dock robot to the automated printing systems. Every time the robot goes to load an original book, the factory printer makes a duplicate run of all the phone books! Fortunately, the robot's bug-detection systems are sophisticated enough that the robot doesn't try printing even more copies when it encounters a duplicate book for loading, but it still has to load every original and duplicate book that's been printed.
#4 Best answer 2 of What does O(log n) mean exactly? (Score: 714)
Created: 2010-02-21 Last updated: 2014-12-29
O(log N) basically means time goes up linearly while n goes up exponentially. So if it takes 1 second to compute 10 elements, it will take 2 seconds to compute 100 elements, 3 seconds to compute 1000 elements, and so on.
It is O(log n) when we do divide-and-conquer types of algorithms, e.g. binary search. Another example is quicksort, where each time we divide the array into two parts, and each time it takes O(N) time to find a pivot element. Hence it is N · O(log N) = O(N log N).
See also original question in stackoverflow
#5: What is the optimal algorithm for the game 2048? (Score: 1967)
Created: 2014-03-12 Last updated: 2017-02-22
Tags: algorithm, logic, artificial-intelligence, 2048
I have recently stumbled upon the game 2048. You merge similar tiles by moving them in any of the four directions to make "bigger" tiles. After each move, a new tile appears at a random empty position with a value of either 2 or 4. The game terminates when all the boxes are filled and there are no moves that can merge tiles, or you create a tile with a value of 2048.
One, I need to follow a welldefined strategy to reach the goal. So, I thought of writing a program for it.
My current algorithm:
while (!game_over) {
    for each possible move:
        count_no_of_merges_for_2-tiles and 4-tiles
    choose the move with a large number of merges
}
What I am doing is that at any point, I will try to merge the tiles with values 2 and 4, that is, I try to have as few 2 and 4 tiles as possible. If I try it this way, all other tiles were automatically getting merged and the strategy seems good.
But, when I actually use this algorithm, I only get around 4000 points before the game terminates. Maximum points AFAIK is slightly more than 20,000 points which is way larger than my current score. Is there a better algorithm than the above?
#5 Best answer 1 of What is the optimal algorithm for the game 2048? (Score: 1299)
Created: 2014-03-19 Last updated: 2015-05-08
I developed a 2048 AI using expectimax optimization, instead of the minimax search used by @ovolve’s algorithm. The AI simply performs maximization over all possible moves, followed by expectation over all possible tile spawns (weighted by the probability of the tiles, i.e. 10% for a 4 and 90% for a 2). As far as I’m aware, it is not possible to prune expectimax optimization (except to remove branches that are exceedingly unlikely), and so the algorithm used is a carefully optimized brute force search.
Performance
The AI in its default configuration (max search depth of 8) takes anywhere from 10ms to 200ms to execute a move, depending on the complexity of the board position. In testing, the AI achieves an average move rate of 5-10 moves per second over the course of an entire game. If the search depth is limited to 6 moves, the AI can easily execute 20+ moves per second, which makes for some interesting watching.
To assess the score performance of the AI, I ran the AI 100 times (connected to the browser game via remote control). For each tile, here are the proportions of games in which that tile was achieved at least once:
2048: 100%
4096: 100%
8192: 100%
16384: 94%
32768: 36%
The minimum score over all runs was 124024; the maximum score achieved was 794076. The median score is 387222. The AI never failed to obtain the 2048 tile (so it never lost the game even once in 100 games); in fact, it achieved the 8192 tile at least once in every run!
The original answer includes a screenshot of the best run:
This game took 27830 moves over 96 minutes, or an average of 4.8 moves per second.
Implementation
My approach encodes the entire board (16 entries) as a single 64-bit integer (where tiles are the nybbles, i.e. 4-bit chunks). On a 64-bit machine, this enables the entire board to be passed around in a single machine register.
Bit shift operations are used to extract individual rows and columns. A single row or column is a 16-bit quantity, so a table of size 65536 can encode transformations which operate on a single row or column. For example, moves are implemented as 4 lookups into a precomputed "move effect table" which describes how each move affects a single row or column (for example, the "move right" table contains the entry "1122 -> 0023" describing how the row [2,2,4,4] becomes the row [0,0,4,8] when moved to the right).
Scoring is also done using table lookup. The tables contain heuristic scores computed on all possible rows/columns, and the resultant score for a board is simply the sum of the table values across each row and column.
This board representation, along with the table lookup approach for movement and scoring, allows the AI to search a huge number of game states in a short period of time (over 10,000,000 game states per second on one core of my mid-2011 laptop).
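The row extraction can be sketched in JavaScript, with BigInt standing in for the 64-bit integer (an illustration of the encoding described above, not the author's actual code):

```javascript
// Board: 16 nybbles packed into one 64-bit value; each nybble holds a
// tile's exponent (e.g. 5 means 2^5 = 32). Rows are 16-bit slices, so a
// 65536-entry lookup table can describe any per-row transformation.
function setTile(board, index, exponent) {
  const shift = BigInt(4 * index);
  return (board & ~(0xFn << shift)) | (BigInt(exponent) << shift);
}

function getRow(board, r) {
  return Number((board >> BigInt(16 * r)) & 0xFFFFn); // 4 nybbles = 16 bits
}
```

A precomputed move table would then be indexed directly by getRow's 16-bit result.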
The expectimax search itself is coded as a recursive search which alternates between "expectation" steps (testing all possible tile spawn locations and values, and weighting their optimized scores by the probability of each possibility), and "maximization" steps (testing all possible moves and selecting the one with the best score). The tree search terminates when it sees a previously-seen position (using a transposition table), when it reaches a predefined depth limit, or when it reaches a board state that is highly unlikely (e.g. it was reached by getting 6 "4" tiles in a row from the starting position). The typical search depth is 4-8 moves.
Heuristics
Several heuristics are used to direct the optimization algorithm towards favorable positions. The precise choice of heuristic has a huge effect on the performance of the algorithm. The various heuristics are weighted and combined into a positional score, which determines how “good” a given board position is. The optimization search will then aim to maximize the average score of all possible board positions. The actual score, as shown by the game, is not used to calculate the board score, since it is too heavily weighted in favor of merging tiles (when delayed merging could produce a large benefit).
Initially, I used two very simple heuristics, granting “bonuses” for open squares and for having large values on the edge. These heuristics performed pretty well, frequently achieving 16384 but never getting to 32768.
Petr Morávek (@xificurk) took my AI and added two new heuristics. The first heuristic was a penalty for having non-monotonic rows and columns which increased as the ranks increased, ensuring that non-monotonic rows of small numbers would not strongly affect the score, but non-monotonic rows of large numbers hurt the score substantially. The second heuristic counted the number of potential merges (adjacent equal values) in addition to open spaces. These two heuristics served to push the algorithm towards monotonic boards (which are easier to merge), and towards board positions with lots of merges (encouraging it to align merges where possible for greater effect).
Furthermore, Petr also optimized the heuristic weights using a "meta-optimization" strategy (using an algorithm called CMA-ES), where the weights themselves were adjusted to obtain the highest possible average score.
The effect of these changes is extremely significant. The algorithm went from achieving the 16384 tile around 13% of the time to achieving it over 90% of the time, and the algorithm began to achieve 32768 over 1/3 of the time (whereas the old heuristics never once produced a 32768 tile).
I believe there’s still room for improvement on the heuristics. This algorithm definitely isn’t yet “optimal”, but I feel like it’s getting pretty close.
That the AI achieves the 32768 tile in over a third of its games is a huge milestone; I will be surprised to hear if any human players have achieved 32768 on the official game (i.e. without using tools like save-states or undo). I think the 65536 tile is within reach!
You can try the AI for yourself. The code is available at https://github.com/nneonneo/2048ai.
#5 Best answer 2 of What is the optimal algorithm for the game 2048? (Score: 1269)
Created: 20140313 Last updated: 20200620
I’m the author of the AI program that others have mentioned in this thread. You can view the AI in action or read the source.
Currently, the program achieves about a 90% win rate running in JavaScript in the browser on my laptop given about 100 milliseconds of thinking time per move, so while not perfect (yet!) it performs pretty well.
Since the game is a discrete state space, perfect information, turn-based game like chess and checkers, I used the same methods that have been proven to work on those games, namely minimax search with alpha-beta pruning. Since there is already a lot of info on that algorithm out there, I’ll just talk about the two main heuristics that I use in the static evaluation function and which formalize many of the intuitions that other people have expressed here.
Monotonicity
This heuristic tries to ensure that the values of the tiles are all either increasing or decreasing along both the left/right and up/down directions. This heuristic alone captures the intuition that many others have mentioned, that higher valued tiles should be clustered in a corner. It will typically prevent smaller valued tiles from getting orphaned and will keep the board very organized, with smaller tiles cascading in and filling up into the larger tiles.
Here’s a screenshot of a perfectly monotonic grid. I obtained this by running the algorithm with the eval function set to disregard the other heuristics and only consider monotonicity.
Smoothness
The above heuristic alone tends to create structures in which adjacent tiles are decreasing in value, but of course in order to merge, adjacent tiles need to be the same value. Therefore, the smoothness heuristic just measures the value difference between neighboring tiles, trying to minimize this count.
A commenter on Hacker News gave an interesting formalization of this idea in terms of graph theory.
Here’s a screenshot of a perfectly smooth grid, courtesy of this excellent parody fork.
Free Tiles
And finally, there is a penalty for having too few free tiles, since options can quickly run out when the game board gets too cramped.
And that’s it! Searching through the game space while optimizing these criteria yields remarkably good performance. One advantage to using a generalized approach like this rather than an explicitly coded move strategy is that the algorithm can often find interesting and unexpected solutions. If you watch it run, it will often make surprising but effective moves, like suddenly switching which wall or corner it’s building up against.
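As a rough illustration of how such a static evaluation might look, here is a hedged Python sketch combining smoothness and free tiles on a 4x4 grid of tile ranks; the weights here are invented for the example, not the tuned values from the real AI:

```python
# Hypothetical weights, chosen for illustration only.
W_SMOOTH, W_FREE = 0.1, 2.7

def smoothness(grid):
    """Sum of absolute rank differences between adjacent non-empty tiles
    (lower is smoother, i.e. neighbors are closer in value)."""
    diff = 0
    for r in range(4):
        for c in range(4):
            if grid[r][c] == 0:
                continue
            if c + 1 < 4 and grid[r][c + 1] != 0:
                diff += abs(grid[r][c] - grid[r][c + 1])
            if r + 1 < 4 and grid[r + 1][c] != 0:
                diff += abs(grid[r][c] - grid[r + 1][c])
    return diff

def free_tiles(grid):
    """Number of empty squares on the board."""
    return sum(1 for row in grid for v in row if v == 0)

def evaluate(grid):
    """Static evaluation: reward free tiles, penalize roughness."""
    return W_FREE * free_tiles(grid) - W_SMOOTH * smoothness(grid)

flat = [[1, 1, 1, 1]] + [[0] * 4] * 3    # smooth top row
rough = [[1, 5, 1, 5]] + [[0] * 4] * 3   # alternating ranks
assert evaluate(flat) > evaluate(rough)  # smoother board scores higher
```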
Edit:
Here’s a demonstration of the power of this approach. I uncapped the tile values (so it kept going after reaching 2048) and here is the best result after eight trials.
Yes, that’s a 4096 alongside a 2048. =) That means it achieved the elusive 2048 tile three times on the same board.
See also original question in stackoverflow
#6: What is tail recursion? (Score: 1821)
Created: 20080829 Last updated: 20161011
Tags: algorithm, language-agnostic, functional-programming, recursion, tail-recursion
Whilst starting to learn Lisp, I’ve come across the term tail-recursive. What does it mean exactly?
#6 Best answer 1 of What is tail recursion? (Score: 1847)
Created: 20080831 Last updated: 20200607
Consider a simple function that adds the first N natural numbers (e.g. sum(5) = 1 + 2 + 3 + 4 + 5 = 15).
Here is a simple JavaScript implementation that uses recursion:
function recsum(x) {
    if (x === 1) {
        return x;
    } else {
        return x + recsum(x - 1);
    }
}
If you called recsum(5), this is what the JavaScript interpreter would evaluate:
recsum(5)
5 + recsum(4)
5 + (4 + recsum(3))
5 + (4 + (3 + recsum(2)))
5 + (4 + (3 + (2 + recsum(1))))
5 + (4 + (3 + (2 + 1)))
15
Note how every recursive call has to complete before the JavaScript interpreter begins to actually do the work of calculating the sum.
Here’s a tail-recursive version of the same function:
function tailrecsum(x, running_total = 0) {
    if (x === 0) {
        return running_total;
    } else {
        return tailrecsum(x - 1, running_total + x);
    }
}
Here’s the sequence of events that would occur if you called tailrecsum(5) (which would effectively be tailrecsum(5, 0), because of the default second argument):
tailrecsum(5, 0)
tailrecsum(4, 5)
tailrecsum(3, 9)
tailrecsum(2, 12)
tailrecsum(1, 14)
tailrecsum(0, 15)
15
In the tail-recursive case, with each evaluation of the recursive call, the running_total is updated.
Note: The original answer used examples from Python. These have been changed to JavaScript, since Python interpreters don’t support tail call optimization. However, while tail call optimization is part of the ECMAScript 2015 spec, most JavaScript interpreters don’t support it.
#6 Best answer 2 of What is tail recursion? (Score: 760)
Created: 20080829 Last updated: 20180730
In traditional recursion, the typical model is that you perform your recursive calls first, and then you take the return value of the recursive call and calculate the result. In this manner, you don’t get the result of your calculation until you have returned from every recursive call.
In tail recursion, you perform your calculations first, and then you execute the recursive call, passing the results of your current step to the next recursive step. This results in the last statement being in the form of (return (recursive-function params)). Basically, the return value of any given recursive step is the same as the return value of the next recursive call.
The consequence of this is that once you are ready to perform your next recursive step, you don’t need the current stack frame any more. This allows for some optimization. In fact, with an appropriately written compiler, you should never have a stack overflow (snicker) with a tail-recursive call. Simply reuse the current stack frame for the next recursive step. I’m pretty sure Lisp does this.
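Since most interpreters (CPython included) do not perform this optimization, the frame reuse described above can also be done by hand: any tail-recursive function translates mechanically into a loop that overwrites its own variables. A Python sketch of the tailrecsum example from the answer above:

```python
def tail_rec_sum(x, running_total=0):
    """Iterative equivalent of the tail-recursive tailrecsum.

    Each loop iteration plays the role of one tail call: instead of
    pushing a new stack frame, we overwrite x and running_total in place.
    """
    while x != 0:
        x, running_total = x - 1, running_total + x
    return running_total

assert tail_rec_sum(5) == 15
# Unlike naive recursion, this handles large inputs without a
# RecursionError, because the stack depth stays constant.
assert tail_rec_sum(100000) == 100000 * 100001 // 2
```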
See also original question in stackoverflow
#7: Image Processing: Algorithm Improvement for 'Coca-Cola Can' Recognition (Score: 1741)
Created: 20120416 Last updated: 20160316
Tags: c++, algorithm, image-processing, opencv
One of the most interesting projects I’ve worked on in the past couple of years was a project about image processing. The goal was to develop a system to be able to recognize Coca-Cola ‘cans’ (note that I’m stressing the word ‘cans’, you’ll see why in a minute). You can see a sample below, with the can recognized in the green rectangle with scale and rotation.
Some constraints on the project:
 The background could be very noisy.
 The can could have any scale or rotation or even orientation (within reasonable limits).
 The image could have some degree of fuzziness (contours might not be entirely straight).
 There could be Coca-Cola bottles in the image, and the algorithm should only detect the can!
 The brightness of the image could vary a lot (so you can’t rely “too much” on color detection).
 The can could be partly hidden on the sides or the middle and possibly partly hidden behind a bottle.
 There could be no can at all in the image, in which case you had to find nothing and write a message saying so.
So you could end up with tricky things like this (which, in this case, made my algorithm fail completely):
I did this project a while ago, and had a lot of fun doing it, and I had a decent implementation. Here are some details about my implementation:
Language: Done in C++ using OpenCV library.
Preprocessing: For the image preprocessing, i.e. transforming the image into a more raw form to give to the algorithm, I used 2 methods:
 Changing the color domain from RGB to HSV and filtering based on “red” hue, saturation above a certain threshold to avoid orange-like colors, and filtering of low value to avoid dark tones. The end result was a binary black and white image, where all white pixels would represent the pixels that match this threshold. Obviously there is still a lot of crap in the image, but this reduces the number of dimensions you have to work with.
 Noise filtering using median filtering (taking the median pixel value of all neighbors and replacing the pixel by this value) to reduce noise.
 Using a Canny edge detection filter to get the contours of all items after the 2 preceding steps.
Algorithm: The algorithm itself I chose for this task was taken from this awesome book on feature extraction and called Generalized Hough Transform (pretty different from the regular Hough Transform). It basically says a few things:
 You can describe an object in space without knowing its analytical equation (which is the case here).
 It is resistant to image deformations such as scaling and rotation, as it will basically test your image for every combination of scale factor and rotation factor.
 It uses a base model (a template) that the algorithm will “learn”.
 Each pixel remaining in the contour image will vote for another pixel which will supposedly be the center (in terms of gravity) of your object, based on what it learned from the model.
In the end, you end up with a heat map of the votes, for example here all the pixels of the contour of the can will vote for its gravitational center, so you’ll have a lot of votes in the same pixel corresponding to the center, and will see a peak in the heat map as below:
Once you have that, a simple threshold-based heuristic can give you the location of the center pixel, from which you can derive the scale and rotation and then plot your little rectangle around it (the final scale and rotation factor will obviously be relative to your original template). In theory at least…
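The voting step itself is compact. Here is a toy, translation-only Python sketch of the Generalized Hough Transform idea (my own illustration, not the question's C++/OpenCV code; the scale and rotation loops, and gradient-indexed R-tables, are omitted):

```python
from collections import Counter

def learn_r_table(template_edges, center):
    """Simplest possible R-table: offsets from template edge pixels to
    the template centre. (A real implementation indexes these offsets
    by local gradient direction.)"""
    cx, cy = center
    return [(cx - x, cy - y) for x, y in template_edges]

def vote(target_edges, r_table):
    """Each edge pixel in the target votes for every centre position
    its learned offsets imply; the accumulator is the 'heat map'."""
    acc = Counter()
    for x, y in target_edges:
        for dx, dy in r_table:
            acc[(x + dx, y + dy)] += 1
    return acc

# Toy example: a 3-pixel "shape" with centre (5, 5), translated by (+10, +2).
template = [(4, 5), (6, 5), (5, 7)]
table = learn_r_table(template, (5, 5))
shifted = [(x + 10, y + 2) for x, y in template]
peak, votes = vote(shifted, table).most_common(1)[0]
assert peak == (15, 7)   # the peak of the heat map is the new centre
assert votes == 3        # all three edge pixels agreed
```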
Results: Now, while this approach worked in the basic cases, it was severely lacking in some areas:
 It is extremely slow! I’m not stressing this enough. Almost a full day was needed to process the 30 test images, obviously because I had a very high scaling factor for rotation and translation, since some of the cans were very small.
 It was completely lost when bottles were in the image, and for some reason almost always found the bottle instead of the can (perhaps because bottles were bigger, thus had more pixels, thus more votes)
 Fuzzy images were also no good, since the votes ended up in pixels at random locations around the center, thus ending with a very noisy heat map.
 Invariance in translation and rotation was achieved, but not in orientation, meaning that a can that was not directly facing the camera objective wasn’t recognized.
Can you help me improve my specific algorithm, using exclusively OpenCV features, to resolve the four specific issues mentioned?
I hope some people will also learn something out of it as well, after all I think not only people who ask questions should learn. :)
#7 Best answer 1 of Image Processing: Algorithm Improvement for 'Coca-Cola Can' Recognition (Score: 711)
Created: 20120416 Last updated: 20200925
An alternative approach would be to extract features (keypoints) using the scale-invariant feature transform (SIFT) or Speeded Up Robust Features (SURF).
You can find a nice OpenCV code example in Java, C++, and Python on this page: Features2D + Homography to find a known object
Both algorithms are invariant to scaling and rotation. Since they work with features, you can also handle occlusion (as long as enough keypoints are visible).
Image source: tutorial example
The processing takes a few hundred ms for SIFT; SURF is a bit faster, but it is not suitable for real-time applications. ORB uses FAST, which is weaker regarding rotation invariance.
The original papers
#7 Best answer 2 of Image Processing: Algorithm Improvement for 'Coca-Cola Can' Recognition (Score: 408)
Created: 20120417 Last updated: 20120519
To speed things up, I would take advantage of the fact that you are not asked to find an arbitrary image/object, but specifically one with the Coca-Cola logo. This is significant because this logo is very distinctive, and it should have a characteristic, scale-invariant signature in the frequency domain, particularly in the red channel of RGB. That is to say, the alternating pattern of red-to-white-to-red encountered by a horizontal scan line (trained on a horizontally aligned logo) will have a distinctive “rhythm” as it passes through the central axis of the logo. That rhythm will “speed up” or “slow down” at different scales and orientations, but will remain proportionally equivalent. You could identify/define a few dozen such scan lines, both horizontally and vertically through the logo and several more diagonally, in a starburst pattern. Call these the “signature scan lines.”
Searching for this signature in the target image is a simple matter of scanning the image in horizontal strips. Look for a high-frequency transition in the red channel (indicating a move from a red region to a white one), and once found, see if it is followed by one of the frequency rhythms identified in the training session. Once a match is found, you will instantly know the scan line’s orientation and location in the logo (if you keep track of those things during training), so identifying the boundaries of the logo from there is trivial.
I would be surprised if this weren’t a linearly-efficient algorithm, or nearly so. It obviously doesn’t address your can-bottle discrimination, but at least you’ll have your logos.
(Update: for bottle recognition I would look for coke (the brown liquid) adjacent to the logo – that is, inside the bottle. Or, in the case of an empty bottle, I would look for a cap, which will always have the same basic shape, size, and distance from the logo and will typically be all white or red. Search for a solid-color elliptical shape where a cap should be, relative to the logo. Not foolproof of course, but your goal here should be to find the easy ones fast.)
(It’s been a few years since my image processing days, so I kept this suggestion highlevel and conceptual. I think it might slightly approximate how a human eye might operate – or at least how my brain does!)
See also original question in stackoverflow
#8: What is the best algorithm for overriding GetHashCode? (Score: 1524)
Created: 20081104 Last updated: 20190918
Tags: .net, algorithm, hashcode, gethashcode
In .NET, the GetHashCode method is used in a lot of places throughout the .NET base class libraries. Implementing it properly is especially important to find items quickly in a collection or when determining equality.
Is there a standard algorithm or best practice on how to implement GetHashCode for my custom classes so I don’t degrade performance?
#8 Best answer 1 of What is the best algorithm for overriding GetHashCode? (Score: 1671)
Created: 20081104 Last updated: 20210219
I usually go with something like the implementation given in Josh Bloch’s fabulous Effective Java. It’s fast and creates a pretty good hash which is unlikely to cause collisions. Pick two different prime numbers, e.g. 17 and 23, and do:
public override int GetHashCode()
{
    unchecked // Overflow is fine, just wrap
    {
        int hash = 17;
        // Suitable nullity checks etc, of course :)
        hash = hash * 23 + field1.GetHashCode();
        hash = hash * 23 + field2.GetHashCode();
        hash = hash * 23 + field3.GetHashCode();
        return hash;
    }
}
As noted in comments, you may find it’s better to pick a large prime to multiply by instead. Apparently 486187739 is good… and although most examples I’ve seen with small numbers tend to use primes, there are at least similar algorithms where non-prime numbers are often used. In the not-quite-FNV example later, for example, I’ve used numbers which apparently work well - but the initial value isn’t a prime. (The multiplication constant is prime though. I don’t know quite how important that is.)
This is better than the common practice of XORing hash codes for two main reasons. Suppose we have a type with two int fields:
XorHash(x, x) == XorHash(y, y) == 0 for all x, y
XorHash(x, y) == XorHash(y, x) for all x, y
By the way, the earlier algorithm is the one currently used by the C# compiler for anonymous types.
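Both weaknesses of XOR combining, and the fix, can be checked directly. A small Python demonstration (mimicking the C# scheme with modular arithmetic; this is an illustration, not .NET code):

```python
def xor_hash(a, b):
    """The naive XOR combiner criticized above."""
    return hash(a) ^ hash(b)

def mult_add_hash(a, b):
    """The prime-multiply-and-add scheme from the answer above,
    reduced modulo 2**32 to mimic C#'s unchecked int arithmetic."""
    h = 17
    h = (h * 23 + hash(a)) % 2**32
    h = (h * 23 + hash(b)) % 2**32
    return h

# XOR collapses equal fields and is symmetric:
assert xor_hash(42, 42) == xor_hash(7, 7) == 0
assert xor_hash(3, 5) == xor_hash(5, 3)
# The multiply-add combiner distinguishes both cases:
assert mult_add_hash(42, 42) != mult_add_hash(7, 7)
assert mult_add_hash(3, 5) != mult_add_hash(5, 3)
```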
This page gives quite a few options. I think for most cases the above is “good enough” and it’s incredibly easy to remember and get right. The FNV alternative is similarly simple, but uses different constants and XOR instead of ADD as a combining operation. It looks something like the code below, but the normal FNV algorithm operates on individual bytes, so this would require modifying it to perform one iteration per byte, instead of per 32-bit hash value. FNV is also designed for variable lengths of data, whereas the way we’re using it here is always for the same number of field values. Comments on this answer suggest that the code here doesn’t actually work as well (in the sample case tested) as the addition approach above.
// Note: Not quite FNV!
public override int GetHashCode()
{
    unchecked // Overflow is fine, just wrap
    {
        int hash = (int) 2166136261;
        // Suitable nullity checks etc, of course :)
        hash = (hash * 16777619) ^ field1.GetHashCode();
        hash = (hash * 16777619) ^ field2.GetHashCode();
        hash = (hash * 16777619) ^ field3.GetHashCode();
        return hash;
    }
}
One thing to be aware of: ideally you should prevent your equality-sensitive (and thus hash-code-sensitive) state from changing after adding the object to a collection that depends on the hash code.
As per the documentation:
You can override GetHashCode for immutable reference types. In general, for mutable reference types, you should override GetHashCode only if:
 You can compute the hash code from fields that are not mutable; or
 You can ensure that the hash code of a mutable object does not change while the object is contained in a collection that relies on its hash code.
The link to the FNV article is broken, but here is a copy in the Internet Archive: Eternally Confuzzled - The Art of Hashing
#8 Best answer 2 of What is the best algorithm for overriding GetHashCode? (Score: 472)
Created: 20110107 Last updated: 20200715
ValueTuple - Update for C# 7
As @cactuaroid mentions in the comments, a value tuple can be used. This saves a few keystrokes and more importantly executes purely on the stack (no Garbage):
(PropA, PropB, PropC, PropD).GetHashCode();
(Note: The original technique using anonymous types seems to create an object on the heap, i.e. garbage, since anonymous types are implemented as classes, though this might be optimized out by the compiler. It would be interesting to benchmark these options, but the tuple option should be superior.)
Anonymous Type (Original Answer)
Microsoft already provides a good generic HashCode generator: Just copy your property/field values to an anonymous type and hash it:
new { PropA, PropB, PropC, PropD }.GetHashCode();
This will work for any number of properties. It does not use boxing. It just uses the algorithm already implemented in the framework for anonymous types.
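For comparison only: Python's built-in tuple hash plays the same role as hashing a ValueTuple or anonymous type, combining the component hashes for you (the exact mixing algorithm is an implementation detail):

```python
# Python's tuple hash, like the .NET approaches above, mixes the
# component hashes order-sensitively rather than XORing them flat.
assert hash((1, 2)) != hash((2, 1))  # order matters, unlike plain XOR
assert hash((1, 2)) == hash((1, 2))  # deterministic for the same fields
```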
See also original question in stackoverflow
#9: Easy interview question got harder: given numbers 1..100, find the missing number(s) given exactly k are missing (Score: 1194)
Created: 20100816 Last updated: 20200620
Tags: algorithm, math
I had an interesting job interview experience a while back. The question started really easy:
Q1: We have a bag containing numbers 1, 2, 3, …, 100. Each number appears exactly once, so there are 100 numbers. Now one number is randomly picked out of the bag. Find the missing number.
I’ve heard this interview question before, of course, so I very quickly answered along the lines of:
A1: Well, the sum of the numbers 1 + 2 + 3 + … + N is (N+1)(N/2) (see Wikipedia: sum of arithmetic series). For N = 100, the sum is 5050. Thus, if all numbers are present in the bag, the sum will be exactly 5050. Since one number is missing, the sum will be less than this, and the difference is that number. So we can find that missing number in O(N) time and O(1) space.
At this point I thought I had done well, but all of a sudden the question took an unexpected turn:
Q2: That is correct, but now how would you do this if TWO numbers are missing?
I had never seen/heard/considered this variation before, so I panicked and couldn’t answer the question. The interviewer insisted on knowing my thought process, so I mentioned that perhaps we can get more information by comparing against the expected product, or perhaps doing a second pass after having gathered some information from the first pass, etc, but I really was just shooting in the dark rather than actually having a clear path to the solution.
The interviewer did try to encourage me by saying that having a second equation is indeed one way to solve the problem. At this point I was kind of upset (for not knowing the answer beforehand), and asked if this is a general (read: “useful”) programming technique, or if it’s just a trick/gotcha answer.
The interviewer’s answer surprised me: you can generalize the technique to find 3 missing numbers. In fact, you can generalize it to find k missing numbers.
Qk: If exactly k numbers are missing from the bag, how would you find them efficiently?
This was a few months ago, and I still couldn’t figure out what this technique is. Obviously there’s an Ω(N) time lower bound since we must scan all the numbers at least once, but the interviewer insisted that the TIME and SPACE complexity of the solving technique (minus the O(N) time input scan) is defined in terms of k, not N.
So the question here is simple:
 How would you solve Q2?
 How would you solve Q3?
 How would you solve Qk?
Clarifications
 Generally there are N numbers from 1..N, not just 1..100.
 I’m not looking for the obvious set-based solution, e.g. using a bit set, encoding the presence/absence of each number by the value of a designated bit, therefore using O(N) bits in additional space. We can’t afford any additional space proportional to N.
 I’m also not looking for the obvious sort-first approach. This and the set-based approach are worth mentioning in an interview (they are easy to implement, and depending on N, can be very practical). I’m looking for the Holy Grail solution (which may or may not be practical to implement, but has the desired asymptotic characteristics nevertheless).
So again, of course you must scan the input in O(N), but you can only capture a small amount of information (defined in terms of k, not N), and must then find the k missing numbers somehow.
#9 Best answer 1 of Easy interview question got harder: given numbers 1..100, find the missing number(s) given exactly k are missing (Score: 611)
Created: 20100816 Last updated: 20181210
Here’s a summary of Dimitris Andreou’s link.
Remember the sums of i-th powers, where i = 1, 2, …, k. This reduces the problem to solving the system of equations
a_{1} + a_{2} + … + a_{k} = b_{1}
a_{1}^{2} + a_{2}^{2} + … + a_{k}^{2} = b_{2}
…
a_{1}^{k} + a_{2}^{k} + … + a_{k}^{k} = b_{k}
Using Newton’s identities, knowing the b_{i} allows one to compute
c_{1} = a_{1} + a_{2} + … a_{k}
c_{2} = a_{1}a_{2} + a_{1}a_{3} + … + a_{k1}a_{k}
…
c_{k} = a_{1}a_{2} … a_{k}
If you expand the polynomial (x-a_{1})…(x-a_{k}), the coefficients will be exactly c_{1}, …, c_{k} - see Viète’s formulas. Since every polynomial factors uniquely (the ring of polynomials is a Euclidean domain), this means the a_{i} are uniquely determined, up to permutation.
This ends a proof that remembering powers is enough to recover the numbers. For constant k, this is a good approach.
However, when k is varying, the direct approach of computing c_{1}, …, c_{k} is prohibitively expensive, since e.g. c_{k} is the product of all missing numbers, of magnitude n!/(n-k)!. To overcome this, perform computations in the field Z_{q}, where q is a prime such that n <= q < 2n - it exists by Bertrand’s postulate. The proof doesn’t need to be changed, since the formulas still hold, and factorization of polynomials is still unique. You also need an algorithm for factorization over finite fields, for example the one by Berlekamp or Cantor-Zassenhaus.
High level pseudocode for constant k:
 Compute the i-th powers of the given numbers.
 Subtract to get the sums of the i-th powers of the unknown numbers. Call the sums b_{i}.
 Use Newton’s identities to compute coefficients from b_{i}; call them c_{i}. Basically, c_{1} = b_{1}; c_{2} = (c_{1}b_{1} - b_{2})/2; see Wikipedia for exact formulas.
 Factor the polynomial x^{k} - c_{1}x^{k-1} + c_{2}x^{k-2} - … + (-1)^{k}c_{k}.
 The roots of the polynomial are the needed numbers a_{1}, …, a_{k}.
For varying k, first find a prime q with n <= q < 2n using e.g. Miller-Rabin, and perform the steps with all numbers reduced modulo q.
EDIT: The previous version of this answer stated that instead of Z_{q}, where q is prime, it is possible to use a finite field of characteristic 2 (q = 2^(log n)). This is not the case, since Newton’s formulas require division by numbers up to k.
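For the concrete case k = 2, the machinery above reduces to two equations in two unknowns, which can be solved without any polynomial factorization. A hedged Python sketch (the function name and structure are my own, not from the original answer):

```python
import math

def find_two_missing(nums, n):
    """Find the two numbers missing from 1..n, given the rest in `nums`.

    Uses the sum and the sum of squares: with s = a + b and
    q = a^2 + b^2, we get a*b = (s^2 - q) / 2, so a and b are the
    roots of x^2 - s*x + a*b = 0. O(N) time, O(1) extra space.
    """
    s = n * (n + 1) // 2 - sum(nums)                               # a + b
    q = n * (n + 1) * (2 * n + 1) // 6 - sum(v * v for v in nums)  # a^2 + b^2
    prod = (s * s - q) // 2                                        # a * b
    disc = math.isqrt(s * s - 4 * prod)   # discriminant of the quadratic
    a = (s - disc) // 2
    b = (s + disc) // 2
    return a, b

# 1 and 4 are missing from 1..5:
assert find_two_missing([2, 3, 5], 5) == (1, 4)
```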
#9 Best answer 2 of Easy interview question got harder: given numbers 1..100, find the missing number(s) given exactly k are missing (Score: 249)
Created: 20100816 Last updated: 20210329
You will find it by reading a couple of pages of Muthukrishnan’s Data Stream Algorithms: Puzzle 1: Finding Missing Numbers. It shows exactly the generalization you are looking for. Probably this is what your interviewer read and why he posed these questions.
Also see sdcvvc’s directly related answer, which also includes pseudocode (hurray! no need to read those tricky math formulations :)) (thanks, great work!).
See also original question in stackoverflow
#10: Removing duplicates in lists (Score: 1152)
Created: 20111101 Last updated: 20210411
Tags: python, algorithm, list, duplicates, intersection
Pretty much, I need to write a program to check if a list has any duplicates, and if it does, remove them and return a new list with the items that weren’t duplicated/removed. This is what I have, but to be honest I do not know what to do.
def remove_duplicates():
    t = ['a', 'b', 'c', 'd']
    t2 = ['a', 'c', 'd']
    for t in t2:
        t.append(t.remove())
    return t
#10 Best answer 1 of Removing duplicates in lists (Score: 1866)
Created: 20111101 Last updated: 20191209
The common approach to get a unique collection of items is to use a set. Sets are unordered collections of distinct objects. To create a set from any iterable, you can simply pass it to the built-in set() function. If you later need a real list again, you can similarly pass the set to the list() function.
The following example should cover whatever you are trying to do:
>>> t = [1, 2, 3, 1, 2, 5, 6, 7, 8]
>>> t
[1, 2, 3, 1, 2, 5, 6, 7, 8]
>>> list(set(t))
[1, 2, 3, 5, 6, 7, 8]
>>> s = [1, 2, 3]
>>> list(set(t) - set(s))
[8, 5, 6, 7]
As you can see from the example result, the original order is not maintained. As mentioned above, sets themselves are unordered collections, so the order is lost. When converting a set back to a list, an arbitrary order is created.
Maintaining order
If order is important to you, then you will have to use a different mechanism. A very common solution for this is to rely on OrderedDict to keep the order of keys during insertion:
>>> from collections import OrderedDict
>>> list(OrderedDict.fromkeys(t))
[1, 2, 3, 5, 6, 7, 8]
Starting with Python 3.7, the built-in dictionary is guaranteed to maintain insertion order as well, so you can also use that directly if you are on Python 3.7 or later (or CPython 3.6):
>>> list(dict.fromkeys(t))
[1, 2, 3, 5, 6, 7, 8]
Note that this may have some overhead of creating a dictionary first, and then creating a list from it. If you don’t actually need to preserve the order, you’re often better off using a set, especially because it gives you a lot more operations to work with. Check out this question for more details and alternative ways to preserve the order when removing duplicates.
Finally, note that both the set and the OrderedDict/dict solutions require your items to be hashable. This usually means that they have to be immutable. If you have to deal with items that are not hashable (e.g. list objects), then you will have to use a slow approach in which you will basically have to compare every item with every other item in a nested loop.
#10 Best answer 2 of Removing duplicates in lists (Score: 443)
Created: 20111101 Last updated: 20171222
In Python 2.7, the new way of removing duplicates from an iterable while keeping it in the original order is:
>>> from collections import OrderedDict
>>> list(OrderedDict.fromkeys('abracadabra'))
['a', 'b', 'r', 'c', 'd']
In Python 3.5, the OrderedDict has a C implementation. My timings show that this is now both the fastest and shortest of the various approaches for Python 3.5.
In Python 3.6, the regular dict became both ordered and compact. (This holds for CPython and PyPy but may not be present in other implementations.) That gives us a new fastest way of deduping while retaining order:
>>> list(dict.fromkeys('abracadabra'))
['a', 'b', 'r', 'c', 'd']
In Python 3.7, the regular dict is guaranteed to be ordered across all implementations. So the shortest and fastest solution is:
>>> list(dict.fromkeys('abracadabra'))
['a', 'b', 'r', 'c', 'd']
See also original question in stackoverflow
Notes:
 This page uses an API to get the relevant data from the Stack Overflow community.
 Content license on this page is CC BY-SA 3.0.
 score = upvotes - downvotes.