• Breaking News

    Saturday, March 30, 2019


    Week in Ethereum News is out!

    Posted: 30 Mar 2019 03:04 PM PDT

    What's New in Eth2 -- 29 March 2019 [The Taking Stock Edition]

    Posted: 30 Mar 2019 05:30 AM PDT

    A Vision of a System Registry for The World Computer (related to dType: ERC-1882)

    Posted: 30 Mar 2019 11:02 AM PDT

    Pando - Decentralized GitHub - live on Rinkeby!

    Posted: 30 Mar 2019 12:10 PM PDT

    ETH Staking reward Calculator!

    Posted: 30 Mar 2019 04:59 AM PDT

    DApp deployments, podcast appearances and xDai validators! It's been a very busy month and we have a ton of great and exciting updates for you! Let's dive into it!

    Posted: 30 Mar 2019 12:48 PM PDT

    The Cost of ASIC Design - Kristy-Leigh Minehan @OhGodAGirl - A response to the infamous #ProgPoW Tumblr post

    Posted: 30 Mar 2019 06:31 AM PDT

    VLog #1 - A personal message from Bitcoin to Vitalik

    Posted: 30 Mar 2019 02:56 PM PDT

    Hack on Evernote notes about crypto holdings allowed hackers to steal plenty

    Posted: 30 Mar 2019 04:07 PM PDT

    Not many details: I'm one of the fools who kept notes about my cold storage in Evernote. Since yesterday my tokens and ETH have been moved out, and tracing the routes I see more people who potentially lost their funds. Got a message about 'suspicious activity' from Evernote recently? Store your crypto details in there? Fix it now!

    submitted by /u/kirilivanov
    [link] [comments]

    Prediction Markets Won't Work, Here's Why

    Posted: 30 Mar 2019 07:11 PM PDT

    Prediction markets won't work.

    I'm not just saying this. Serious research has been done on the subject that says exactly the same thing.

    Because of the know-it-all attitude I run into with many computer scientists (which I hate 😁), I take extra caution to make sure I don't do the same. I come to say why prediction markets won't work after a massive amount of thinking, with evidence to back it up. Perhaps it'll be analogous to ripping the band-aid off: modifying expectations for investors, and forcing the producers of such markets to make solid game-theoretic modifications so those markets provide real value over time. I'm not saying they won't work at all; they'll work slightly. I am saying they won't be as high-impact as we think they will be. For them to reach high impact, they need a lot of little changes to reach major objectives.

    I'll be referring to Duncan Watts's book, *Everything Is Obvious: Once You Know the Answer*. He's a physicist turned sociologist turned computer scientist. His book got me deeply into systems theory and complexity in relation to the socio-economic realm years ago, so much so that I'm now focusing and staking my entire career on it, even though so far the socio-econo-physics, complexity and analytics industry hasn't yielded productive results for the world since its inception (besides compromising privacy to sell things). We still haven't solved market crashes, inequality, wars between countries, global warming or massive global debt. We've provided little value so far. My goal is to change that over time: to prove our worth as an industry and provide value to people, using the blockchain as a medium.

    The book is an easy read, and has references on why prediction markets won't work. In it he talks about common sense and how it fails us on large-scale problems. We're going to focus only on the prediction side in this piece. If I get a reasonable response to this (not necessarily a positive one), I'll write more about complexity economics, social complexity and systems theory.

    Reasons Why Prediction Markets Won't Work

    1. Predicting Large Complex Systems Is Extremely Difficult

    In chapter 7, between pages 161 and 171, Duncan Watts talks at length about making predictions on complex adaptive systems. Generally, the larger and more complex the system, the more difficult it is to predict the events that follow. This is especially the case when you, and everyone else, face a massive degree of information asymmetry.

    Duncan Watts stated the following about complex systems:

    In complex systems, however, which comprise most of our social and economic life, the best we can hope for is to reliably estimate the probabilities with which certain kinds of events will occur. Second, common sense also demands that we ignore the many uninteresting, unimportant predictions that we could be making all the time, and focus on those outcomes that actually matter. In reality ... black swan events that we most wish we could have predicted are not really events at all, but rather shorthand descriptions—"the French Revolution," "the Internet," "Hurricane Katrina," "the global financial crisis"—of what are in reality whole swaths of history. Predicting black swans is therefore doubly hopeless, because until history has played out it's impossible even to know what the relevant terms are.

    He doesn't mean that everything in existence is unpredictable. He does say later that there's a fine line between predictable elements and unpredictable ones. He followed with a statement:

    To oversimplify somewhat, there are two kinds of events that arise in complex social systems—events that conform to some stable historical pattern, and events that do not ... Every year, for example, each of us may or may not be unlucky enough to catch the flu ... because seasonal influenza trends are relatively consistent from year to year, drug companies can do a reasonable job of anticipating how many flu shots they will need to ship to a given part of the world in a given month ... consumers with identical financial backgrounds may vary widely in their likelihood of defaulting on a credit card, depending on what is going on in their lives ... credit card companies can do a surprisingly good job of predicting aggregate default rates by paying attention to a range of socioeconomic, demographic, and behavioral variables. And Internet companies are increasingly taking advantage of the mountains of Web-browsing data generated by their users to predict the probability that a given user will click on a given search result.

    Prediction markets don't distinguish between what's reasonably predictable and what's not. The larger and more abstract the event, the less likely it is we'll be able to form a solid prediction of what's real.

    2. Prediction Markets Provide Little Gain Compared to Statistical Studies

    The prospect of prediction markets is very appealing.

    Inside that same chapter, Watts put some focus on prediction markets, starting with an introduction of the idea:

    One increasingly popular method is to use what is called a prediction market—meaning a market in which buyers and sellers can trade specially designed securities whose prices correspond to the predicted probability that a specific outcome will take place. - p 164
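
    To make the mechanics concrete: a binary contract in such a market pays a fixed amount (say $1) if the event occurs and nothing otherwise, so its trading price reads directly as the market's implied probability. A minimal sketch in Python; the function names and numbers here are mine, for illustration:

    ```python
    def implied_probability(price, payout=1.0):
        """Price of a binary contract -> the market's implied event probability."""
        return price / payout

    def expected_value(price, my_probability, payout=1.0):
        """Expected profit per contract for a trader who believes the event
        occurs with probability `my_probability`."""
        return my_probability * payout - price

    price = 0.64                        # contract trades at 64 cents
    print(implied_probability(price))   # 0.64 -> market implies a 64% chance
    print(expected_value(price, 0.75))  # ~0.11 -> positive, so buy if you believe 75%
    ```

    Traders buying whenever their private probability exceeds the price is exactly the mechanism that is supposed to pull the price toward the "true" probability.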

    Continuing, to understand why our sentiment toward prediction markets is so high (I too get excited about the idea of them), he laid out a potential scenario of how they would interact:

    The potential of prediction markets to tap into collective wisdom has generated a tremendous amount of excitement among professional economists and policy makers alike. Imagine, for example, that a market had been set up to predict the possibility of a catastrophic failure in deep-water oil drilling in the Gulf prior to the BP disaster in April 2010. Possibly insiders like BP engineers could have participated in the market, effectively making public what they knew about the risks their firms were taking. Possibly then regulators would have had a more accurate assessment of those risks and been more inclined to crack down on the oil industry before a disaster took place. Possibly the disaster could have been averted.

    However, he and many others have run studies on prediction markets to test whether this can actually happen.

    Watts tested the accuracy of markets against simple statistical models:

    little attention has been paid to evaluating the relative performance of different methods, so nobody really knows for sure. To try to settle the matter, my colleagues at Yahoo! Research and I conducted a systematic comparison of several different prediction methods, where the predictions in question were the outcomes of NFL football games. To begin with, for each of the fourteen to sixteen games taking place each weekend over the course of the 2008 season, we conducted a poll in which we asked respondents to state the probability that the home team would win as well as their confidence in their prediction. We also collected similar data from the website Probability Sports, an online contest where participants can win cash prizes by predicting the outcomes of sporting events. Next, we compared the performance of these two polls with the Vegas sports betting market—one of the oldest and most popular betting markets in the world—as well as with another prediction market, TradeSports. And finally, we compared the predictions of both the markets and the polls against two simple statistical models. The first relied only on the historical probability that home teams win—which they do 58 percent of the time—while the second model also factored in the recent win-loss records of the two teams in question. In this way, we set up a six-way comparison between different prediction methods—two statistical models, two markets, and two polls.

    The results:

    Given how different these methods were, what we found was surprising: All of them performed about the same. To be fair, the two prediction markets performed a little better than the other methods, which is consistent with the theoretical argument above. But the very best performing method—the Las Vegas market—was only about 3 percentage points more accurate than the worst-performing method, which was the model that always predicted the home team would win with 58 percent probability.
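
    For readers who want to see what "a few percentage points of accuracy" means operationally, probabilistic forecasts like these are commonly compared with a scoring rule such as the Brier score (the mean squared error of the stated probabilities). The sketch below uses synthetic games with a 58 percent home-win rate; the data and the noisy "market" model are my own illustration, not Watts's actual setup:

    ```python
    import random

    def brier(predictions, outcomes):
        """Mean squared error between predicted probabilities and 0/1 outcomes.
        Lower is better; a perfect forecaster scores 0."""
        return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(outcomes)

    random.seed(7)
    # Synthetic season: the home team wins 58% of the time.
    outcomes = [1 if random.random() < 0.58 else 0 for _ in range(5000)]

    # Model 1: always predict the base rate, like Watts's simplest model.
    baseline = [0.58] * len(outcomes)

    # Model 2: a noisy "market" that sees each outcome through a little extra signal.
    market = [min(max(0.58 + (o - 0.58) * 0.1 + random.gauss(0, 0.05), 0.01), 0.99)
              for o in outcomes]

    print(brier(baseline, outcomes))  # base-rate model's score
    print(brier(market, outcomes))    # slightly better, but not by much
    ```

    The point of the exercise mirrors the quoted result: a modest edge in information moves the score only slightly against a dumb base-rate model.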

    He's a reasonable scientist: when he obtained a result, he tested it against other data sets to see if he was wrong. We generally call this falsification, the process of testing a hypothesis for inaccuracies. Doing this in the social realm is very hard, yet it requires the same amount of rigor. He followed up with another set of studies.

    In the first, they talked to prediction market researchers:

    When we first told some prediction market researchers about this result, their reaction was that it must reflect some special feature of football ... Football games, in other words, have a lot of randomness built into them—arguably, in fact, that's what makes them exciting. In order to be persuaded, our colleagues insisted, we would have to find the same result in some other domain for which the signal-to-noise ratio might be considerably higher than it is in the specific case of football.

    So they tested baseball. These were the results:

    We compared the predictions of the Las Vegas sports betting markets over nearly twenty thousand Major League baseball games played from 1999 to 2006 with a simple statistical model based again on home-team advantage and the recent win-loss records of the two teams. This time, the difference between the two was even smaller—in fact, the performance of the market and the model were indistinguishable. In spite of all the statistics and analysis, in other words, and in spite of the absence of meaningful salary caps in baseball and the resulting concentration of superstar players on teams like the New York Yankees and Boston Red Sox, the outcomes of baseball games are even closer to random events than football games. - p170

    3. Some People Want to See The World Burn

    This is the final reason.

    In The Dark Knight, Alfred gives a powerful speech to Bruce Wayne, making the statement that "some men just want to watch the world burn". This is a problem faced by every game-theoretic design. It shows up in characters like the Joker in Batman and Hisoka in Hunter x Hunter: characters who destroy just for the fun of it.

    Duncan Watts actually explored the concept. He states the following:

    ... it exposed a potential vulnerability of the theory, which assumes that rational traders will not deliberately lose money. The problem is that if the goal of a participant is instead to manipulate perceptions of people outside the market (like the media) and if the amounts involved are relatively small (tens of thousands of dollars, say, compared with the tens of millions of dollars spent on TV advertising), then they may not care about losing money, in which case it's no longer clear what signal the market is sending.

    Prediction markets forget about reflexivity, and about the desire simply to destroy things; ultimately there are no protections against this. Even if a market found enough active participants, you'd have to worry about somebody spending $1-2 million just to influence people's perceptions of small yet significant ideas. That could wreak havoc on anyone using such markets to plan, especially if those plans are leveraged. It's a problem of opportunity cost: generally, if I earn more by destroying your system than by participating in it, even if I earn only indirectly, I'll just do it, because why not?
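
    The opportunity-cost point is plain arithmetic: the manipulator's in-market loss is just a cost of doing business whenever the outside gain is larger. A toy sketch (the numbers are mine, echoing Watts's tens-of-thousands versus tens-of-millions comparison):

    ```python
    def manipulation_payoff(market_loss, outside_gain):
        """Net payoff for a trader who deliberately loses money inside the
        market in order to move its price, but benefits outside it
        (media perception, a leveraged position elsewhere, etc.)."""
        return outside_gain - market_loss

    def is_rational_to_manipulate(market_loss, outside_gain):
        """Manipulation pays whenever the outside gain exceeds the in-market loss."""
        return outside_gain > market_loss

    # Lose $50k moving the market's price; gain a $2M perception shift outside it.
    print(manipulation_payoff(50_000, 2_000_000))        # 1950000
    print(is_rational_to_manipulate(50_000, 2_000_000))  # True
    ```

    As long as the theory assumes no rational trader will deliberately lose money, any actor whose payoff lives outside the market breaks the price-as-probability signal.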

    It's essentially the same problem as market manipulation. People would be fine destroying the market if they got some indirect benefit from it. George Soros did it when he broke the Bank of England; some unknown figures did it when they tanked the market below $6,000. It's easy: there's no defense mechanism against it in international markets, where anybody with a computer can tap in and blow things up.

    I recall seeing a recent article by somebody on this subreddit. He was putting together a solution to reduce the uncooperative games people may want to play and convert them into cooperative games, using staking as a means to limit options. It works to an extent, but it runs into the same problems of destructive tendencies and opportunity cost. It also requires identity, which I doubt people will subscribe to if they don't have to.

    That's it

    So that concludes why I reason prediction markets won't work. It's mostly an analysis. One might infer that just because you can't use them in one way, you can still use them in others; I would say it's unreasonable to believe that's the case here. The predictions are just too big in range and not well defined.

    Again, if I get feedback on this I'll post on other topics like complexity economics, social complexity and systems theory.

    Sources and Bits of Information:

    1. Ian Ayres (author of Supercrunchers ) calls the relative performance of prediction markets "one of the great unresolved questions of predictive analytics" ( http://freakonomics.blogs.nytimes.com/2009/12/23/prediction-markets-vs-super-crunching-which-can-better-predict-how-justice-kennedy-will-vote/ ).
    2. To be precise, we had different amounts of data for each of the methods—for example, our own polls were conducted over only the 2008–2009 season, whereas we had nearly thirty years of Vegas data, and TradeSports predictions ended in November 2008, when it was shut down—so we couldn't compare all six methods over any given time interval. Nevertheless, for any given interval, we were always able to compare multiple methods. See Goel, Reeves, et al. (2010) for details.
    submitted by /u/kivo360
    [link] [comments]

    EthereumJS VM v3.0.0: Stack & Memory Refactoring, ES6 Classes

    Posted: 30 Mar 2019 03:00 AM PDT

    We have got a new release out: https://github.com/ethereumjs/ethereumjs-vm/releases/tag/v3.0.0

    This release comes with a modernized ES6-class-structured code base, some significant internal refactoring of how Stack and Memory are organized within the VM, and it finalizes a first round of module restructuring, now with separate folders for bloom, evm and state related code. The release also removes some rarely used parts of the API (hookedVM, VM.deps).

    All this is to a large extent preparatory work for a v4.0.0 release, which will follow in the coming months with TypeScript support and more system-wide refactoring leading to a more modular and extendable VM, providing the ground for future eWASM integration. If you are interested in the release process and want to take part in the refactoring discussion, see the associated issue #455.

    VM Refactoring/Breaking Changes

    • New Memory class for evm memory manipulation, PR #442
    • Refactored Stack manipulation in evm, PR #460
    • Dropped createHookedVm (BREAKING), being made obsolete by the new StateManager API, PR #451
    • Dropped VM.deps attribute (please require dependencies yourself if you used this), PR #478
    • Removed fakeBlockchain class and associated tests, PR #466
    • The petersburg hardfork rules are now run as default (before: byzantium), PR #485

    Modularization

    • Renamed vm module to evm, move precompiles to evm module, PR #481
    • Moved stateManager, storageReader and cache to state module, PR #443
    • Replaced static VM logTable with dynamic inline version in EXP opcode, PR #450

    Code Modernization/ES6

    • Converted VM to ES6 class, PR #478
    • Migrated stateManager and storageReader to ES6 class syntax, PR #452

    Bug Fixes

    • Fixed a bug where stateManager.setStateRoot() didn't clear the _storageTries cache, PR #445
    • Fixed longer output than return length in CALL opcode, PR #454
    • Use BN.toArrayLike() instead of BN.toBuffer() (browser compatibility), PR #458
    • Fixed tx value overflow 256 bits, PR #471

    Maintenance/Optimization

    • Use BN reduction context in MODEXP precompile, PR #463

    Documentation

    • Fixed API doc types for Bloom filter methods, PR #439

    Testing

    • New Karma browser testing for the API tests, PRs #461, PR #468
    • Removed unused parts and tests within the test setup, PR #437
    • Fixed a bug using --json trace flag in the tests, PR #438
    • Complete switch to Petersburg on tests, fix coverage, PR #448
    • Added test for StateManager.dumpStorage(), PR #462
    • Fixed ecmul_0-3_5616_28000_96 (by test setup adoption), PR #473
    submitted by /u/HolgerD77
    [link] [comments]

    Eth/mist wallet error. How to fix?

    Posted: 30 Mar 2019 07:37 AM PDT

    I bought Ethereum during the 2014 pre-sale and a JSON file was provided to me. My question is about Ethereum Classic: how can I access the Ethereum Classic that was given to all Ethereum holders? I tried going to MyEtherWallet under Ethereum Classic and uploading the file, but the balance says 0. No issue with ETH.


    ConsenSys’s Cava as Apache Tuweni

    Posted: 30 Mar 2019 01:02 AM PDT

    Lane Rettig (Ethereum core developer): "Ethereum governance has failed."

    Posted: 30 Mar 2019 12:54 PM PDT

    The District Weekly — March 30th, 2019

    Posted: 30 Mar 2019 10:37 AM PDT
