What could be better?
First, it purports to explain what we should think about the future, but never makes a real argument for it. It starts by suggesting there are two important axes on which futurists can differ: whether they expect a singularity or incremental change, and whether they expect the future to be good or bad. So you can end up with utopian singularitarians, dystopian singularitarians, utopian incrementalists, and dystopian incrementalists.
From there it jumps straight to its conclusion: the last group is right, there will be no singularity, and the future will be bad.
Second, the author ignores the future almost completely, in favor of having very strong opinions on which futurist movements include the right or wrong sorts of people.
The author never even begins to give any argument about why the future will be good or bad, or why a singularity might or might not happen. Third, the article wants to classify some technologies as inextricably associated with privilege, but it has a pretty weird conception of which ones they are.
So much so that, given five slots for potentially worrying technologies, she filled all five with the same one? Helping sick people improve their quality of life? Do gross male nerds from the outgroup support that, or oppose it?
Fourth, the article presupposes a bitter conflict between the four quadrants, whereas actually people tend to be a lot more on the same side than she expects. Her pessimists are concerned about algorithmic bias making banks less likely to extend credit to poor people.
Her optimists, meanwhile, just care about flashy new things like cryptocurrency. But one possible application for cryptocurrency is peer-to-peer microfinance via smart contracts, that is, one of the most promising solutions to bias in big financial institutions.
Cryptocurrency enthusiasts are actually working on this, and it seems weird to deny that it matters, or to deny that the whole reason behind developing some of these flashy new technologies is to solve recognized societal problems.
And her singularitarians are strategizing how to deal with far-future advanced AI algorithms, while her nonsingularitarians are strategizing how to deal with near-future primitive AI algorithms. These seem like…not entirely the opposite of each other? Imagine you were writing an article on the different kinds of climatologists studying global warming: some study its effects over the next decade, others model scenarios a century out. Is this a reasonable distinction? Which kind should you be? To try to turn these two positions into arch-enemies would be ridiculous and destructive.
The scientists involved may have different research interests and skillsets, but not necessarily different opinions.
Obviously we should have some people working on near-term problems and other people laying the groundwork to work on long-term problems. In real life, this is what futurists are doing too. The Asilomar Conference on Beneficial AI was organized by people whose main interest was far-future Singularity scenarios, but it included some of the top experts on algorithmic bias, gave the subject a lot of airtime, and ended up with all participants signing onto a set of principles urging more work both on near-term AI problems like algorithmic bias and long-term AI problems like the development of superintelligence.
Again, I feel like this is the kind of error you could only make if you totally missed that futurism was a real subject, and you just wanted to make it into a morality play for your particular political opinions. Fifth, the article claims that there will be no difference for the average person between a positive or negative post-singularity world and the world now. No difference?
Listen up, average person. If the singularity goes badly, you will definitely notice, because you will be very, very dead. So will all the rest of us, rich and poor, old and young, black and white. And if it goes well? I would promise you infinite wealth, but that sort of thing kind of loses its meaning in a post-scarcity society.