03-10 04:07 - 'parallel block validation sort of addresses the big blocks problem and the sighash problem (though I would prefer a sigop limit on transactions). But as for the big blocks problem, miners set their Acceptance Depth (AD) to wha...' by /u/lexensi1 removed from /r/Bitcoin within 74-79min
''' Parallel block validation sort of addresses the big-blocks problem and the sighash problem (though I would prefer a sigop limit on transactions). But as for the big-blocks problem, miners set their Acceptance Depth (AD) to whatever they want, so a block that is too large will need many blocks mined on top of it before it is accepted. The only way that can happen is if a majority of miners agree that the block is not too large.

As for malleability, there is the FlexTrans proposal, but I don't know if it's under consideration by BU or not. SegWit doesn't solve malleability once and for all either, because old-style transactions are still valid, so exchanges, wallets, and other software still need to account for the possibility in the way they are programmed.

Not sure what the median EB attack is. Firstly, AD is now 12. Therefore EB=1MB miners can get 12 blocks orphaned, which would take an expected 4 hours. There would be no warning for users, and they could see funds wiped from their wallets after 11 confirmations. After this, all the EB=1MB miners would have their sticky gates triggered, while all the EB=1.1MB miners would have their sticky gates closed. Now a malicious miner can split the hashrate 50/50 again, this time with the smaller-blockers ironically on the larger-block chain and vice versa. It would be a massive, confusing mess.

It does not actually address either, unfortunately. A mining consortium would be perfectly capable of gaming Bitcoin mining with larger-than-tolerable blocks (or more-than-tolerable cumulative sighash ops within those blocks), regardless of whether smaller, leaner alternative blocks were able to be validated in parallel to them. This particular risk vector actually compounds on itself, too.
Initially, a coalition of 50.1% of hashrate could (possibly even accidentally, especially due to network limiters like the Great Firewall of China) mine and extend upon blocks that are larger than the other 49.9% are able to validate competitively. Even if the 49.9% of miners are able to validate smaller blocks in parallel, they will ultimately be doomed trying to compete with the 50.1%, and as their orphan rates climb and their profitability declines, they would eventually be forced to shut down (assuming they are motivated by profit). This means the remaining 50.1% of miners now make up the entire mining network... and the process can then repeat, with fewer participants on each iteration. Peter Todd also explained this idea very well years ago. Parallel block validation, while an important step forward, unfortunately does nothing to address the underlying issue here. That's why most Bitcoin engineers consider flex-cap proposals to be untenable unless they include proper incentive-alignment controls (e.g. the sacrifice of mining rewards in exchange for larger allowed block sizes). ''' Author: lexensi1
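The EB/AD acceptance rule described in the comment above can be illustrated with a toy simulation. This is not Bitcoin Unlimited's actual code; the function name and chain representation are made up for the sketch, and sticky-gate behavior is omitted:

```python
# Sketch: a BU-style node with an Excessive Block size (EB) and an
# Acceptance Depth (AD). A block larger than EB is held as "excessive"
# until AD blocks have been mined on top of it, at which point the node
# accepts the chain anyway - it has been outvoted by hashpower.

def accepts_chain(chain, eb_bytes, ad):
    """chain: list of block sizes in bytes, oldest first.
    Returns True if a node with the given EB/AD follows this chain."""
    for i, size in enumerate(chain):
        if size > eb_bytes:
            depth_on_top = len(chain) - 1 - i
            if depth_on_top < ad:
                return False  # still waiting for AD confirmations
    return True

# A 1.1 MB block with 12 blocks mined on top: an EB=1MB, AD=12 node
# finally accepts it - after having orphaned its own work for hours.
chain = [1_100_000] + [900_000] * 12
print(accepts_chain(chain, eb_bytes=1_000_000, ad=12))      # True
print(accepts_chain(chain[:6], eb_bytes=1_000_000, ad=12))  # False
```

This is why the comment says an oversized block only sticks if a majority of hashrate keeps building on it: the AD counter is only satisfied by real mined depth.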
Don't worry about the mempool being backed up now -- that's me liquidating the attacker's addresses
The attackers used P2SH addresses that had easily guessable scriptSigs (they lacked a signature altogether to redeem). I ended up liquidating about ~1.2 BCH of their funds just now in 3k tx's. Each tx has 133 inputs at about 15 sigops each, and there is a sigop limit per block of 20,000. So you will see the mempool now has lots of tx's and is 18 MB full as of the time of this writing. These are all the special, sigop-heavy tx's that I made to liquidate (take) the attackers' funds. It should clear in 2 days. Your normal, non-sigop-abusing tx's will not be affected and will confirm way before mine! I am the only one waiting in line. :) But damn.. it felt good to hit the attackers back. Here is a sample tx taking $0.23 at a time.. for a total of ~$500 :): https://blockchair.com/bitcoin-cash/transaction/0354a371f08130986eeedaa08ef69b73630a2182b5f8a8e595a7a9f6603604f2
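The back-of-the-envelope numbers check out. Using the figures from the post and assuming the usual ~144 blocks per day:

```python
# Rough estimate of how long the sweep transactions take to clear,
# using the figures from the post: 133 inputs/tx at ~15 sigops each,
# a 20,000-sigop consensus limit per block, one block every ~10 min.
TXS            = 3_000
SIGOPS_PER_TX  = 133 * 15     # ~1,995 sigops per transaction
BLOCK_LIMIT    = 20_000       # consensus sigop limit per block
BLOCKS_PER_DAY = 144

txs_per_block = BLOCK_LIMIT // SIGOPS_PER_TX  # ~10 tx fit per block
blocks_needed = -(-TXS // txs_per_block)      # ceiling division
days = blocks_needed / BLOCKS_PER_DAY

print(txs_per_block, blocks_needed, round(days, 1))  # 10 300 2.1
```

Roughly 300 blocks at ten transactions each, i.e. a bit over two days, which matches the "should clear in 2 days" estimate above.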
Disclaimers: I am a Bitcoin Verde developer, not an ABC developer. I know C++, but I am not completely familiar with ABC's codebase, its flow, and its nuances. Therefore, my explanation may not be completely correct. This explanation is an attempt to inform those who are at least semi-tech-savvy, so the upgrade hiccup does not become a scary bogeyman that people don't understand.

1- When a new transaction is received by a node, it is added to the mempool (a collection of valid transactions that should/could be included in the next block).

2- During acceptance into the mempool, the number of "sigops" is counted, which is the number of times a signature validation check is performed (technically, it's not a 1-to-1 count, but its purpose is the same).

2a- The reason for limiting sigops is that signature verification is usually the most expensive operation performed while validating a transaction. Without limiting the number of sigops a single block can contain, an easy DoS (denial of service) attack can be constructed by creating a block that takes a very long time to validate because its transactions require a disproportionately large number of sigops. Blocks that take too long to validate (i.e. ones with far too many sigops) can cause a lot of problems, including slow block propagation--which disrupts user experience and can give the incumbent miner a non-negligible competitive advantage in mining the next block. Overall, slow-validating blocks are bad.

3- When accepted into the mempool, the transaction is recorded along with its number of sigops.

3a- This is where the ABC bug lived. During acceptance into the mempool, the transaction's scripts are parsed and each occurrence of a sigop is counted.
When OP_CHECKDATASIG was introduced in the November upgrade, the procedure that counted sigops needed to know whether it should count OP_CHECKDATASIG as a sigop or as nothing (since before November, it was not a signature-checking operation). The procedure is told what to count by a "flag" that is passed along with the script. If the flag is included, OP_CHECKDATASIG is counted as a sigop; without it, it is counted as nothing. Last November, every place that counted sigops included the flag EXCEPT the place where they were recorded in the mempool--there the flag was omitted, and transactions using OP_CHECKDATASIG were logged to the mempool as having zero sigops.

4- When mining a block, the node creates a candidate block--this prototype is completely valid except for the nonce (and the extended nonce/coinbase). The act of mining is finding the correct nonce. When creating the prototype block, the node queries the mempool and finds transactions that can fit in the next block. One of the criteria used to determine eligibility is the sigop count, since a block is only allowed to contain a certain number of sigops.

4a- Recall the ABC bug described in step 3a. The number of sigops for transactions using OP_CHECKDATASIG is recorded as zero--but only during the mempool step, not during any of the other operations. So these OP_CHECKDATASIG transactions can all get grouped into the same block. The prototype block builder thinks the block has very few sigops, but the actual block has many, many sigops.

5- When the miner module is ready to begin mining, it requests the prototype block built in step 4. It re-validates the block to ensure it follows the consensus rules. However, since the new block has too many sigops in it, the mining software falls back to working on an empty block (which is not ideal, but more profitable than leaving thousands of ASICs idle).

6- The empty block is mined and transmitted to the network.
It is a valid block, but it contains no transactions other than the coinbase. Again, this is because the prototype block failed validation due to having too many sigops. This scenario could have happened at any time after OP_CHECKDATASIG was introduced: creating many transactions that only use OP_CHECKDATASIG, and then spending them all at the same time, would create blocks that the mempool thought contained very few sigops but that every other component counted as containing far too many. Instead of mining an invalid block, the mining software decides to mine an empty block. This is also why testnet did not surface the bug: the scenario was fabricated by creating a large number of specifically tailored transactions using OP_CHECKDATASIG and then spending them all within a 10-minute timespan. This kind of behavior is not something developers (including myself) anticipated. I hope my understanding is correct. Please, any ABC devs, correct me if I've explained the scenario wrong. EDIT: markblundeberg added a more accurate explanation of step 5 here.
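The one-missing-flag failure mode described in steps 3a-5 can be sketched in a few lines. This is a hypothetical illustration, not ABC's actual C++ code; the flag and function names are made up for the sketch:

```python
# Sketch of flag-dependent sigop counting. The same counting routine
# is called from every subsystem, but only behaves correctly when the
# caller passes the post-November flag. Names are illustrative.

SCRIPT_VERIFY_CHECKDATASIG = 1 << 0  # "count OP_CHECKDATASIG as a sigop"

def count_sigops(script_ops, flags):
    count = 0
    for op in script_ops:
        if op in ("OP_CHECKSIG", "OP_CHECKSIGVERIFY"):
            count += 1
        elif op in ("OP_CHECKDATASIG", "OP_CHECKDATASIGVERIFY"):
            if flags & SCRIPT_VERIFY_CHECKDATASIG:
                count += 1  # counted only when the flag is set
    return count

tx = ["OP_CHECKDATASIG"] * 50

# Every call site passed the flag... except mempool acceptance:
mempool_sigops   = count_sigops(tx, flags=0)                           # the bug
consensus_sigops = count_sigops(tx, flags=SCRIPT_VERIFY_CHECKDATASIG)

print(mempool_sigops, consensus_sigops)  # 0 vs 50
```

Because the block template is assembled from the mempool's zero counts, a template stuffed with such transactions sails past the template builder but fails the miner's re-validation, which is exactly the empty-block fallback described in step 5.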
A new user sent this to my inbox; it's a description of the events after the fork, with a signed message at the bottom. I've gone through it once, but it's very late here in my timezone, and I'll have to go through it again tomorrow. I'm sure I'm not the only recipient, but just in case I'm pinging some people here. https://honest.cash/kiarahpromise/sigop-counting-4528

*** EDIT 2 *** Before you continue. From the Bitcoin whitepaper: "The system is secure as long as honest nodes collectively control more CPU power than any cooperating group of attacker nodes."

*** EDIT *** OK, I have slept on this. How big is the chance that these two events, the sigop-tx spamming of the network and the intended theft of funds stuck in SegWit outputs by an unknown miner, were coordinated and not coincidental? I wonder whether this was one two-phased plan, and whether even this message was planned (probably somewhat differently, but adapted afterwards to the new situation, which is why the first half of it is such a mess to read) to spread fear after the two plans got foiled. The plan consisted of several acts.

Act 1) Distract and spam the network with sigop transactions that exploit a bug, to cause distraction and halt all BCH transaction volume. The mempool would fill with unconfirmed transactions.

Act 2) When a patch is deployed, start your mining pool and mine hard to quickly create a legitimate block. They prepared the theft transactions and would hide them in the (predicted) massive mempool of unconfirmed transactions that would have accumulated. They would mine a big block, everyone would be so happy that BCH works again, and devs would be busy looking for sigop transactions.
Act 3) Hope that the chain gets locked in via checkpoint so the theft cannot be reverted.

Act 4) Leak to the media that plenty of BCH were stolen after the fork and that the ABC client is so faulty it caused a halt of the network after the upgrade.

Act 5) Make a shitload of money by shorting BCH (there was news about the appearance of a big short position right after the fork).

But the people who planned this attack underestimated the awareness and speed of the BCH dev team. They were probably sure that Act 1 would take hours or even days, so the mempool would be extremely bloated (maybe they speculated that everyone would panic and want out of BCH), and that Act 2 would consequently succeed because no one would spot their theft transactions quickly enough. But they didn't count on someone working together with various BCH pools, as a precaution, to prevent exactly this scenario (SegWit theft), who had even prepared transactions to move all the locked coins back to their owners. Prohashing's orphaned block was likely unintended collateral damage, as Jonathan suggests below, because they were not involved in the plan of the two pools that prepared to return the SegWit coins. I'm guessing the pools did not expect a miner with an attacking theft block that early and had to decide quickly what to do when they spotted it.

So now that both plans have been foiled, Plan B is coming into play: guerrilla-style fear-mongering about how BCH is not decentralized. Spread this info secretly in the community, with proof in the form of a signed message connected to the transactions. Of course, the attacker actually worked alone, attacked us for our own good, and will do so again, because the evil dictatorship devs have to be eradicated... As an unwanted side effect of these events, the BTC.top and BTC.com "partnership" has been exposed. What we do with this new revelation is a question we probably have to discuss.
They worked together with someone who wanted to return the SegWit coins, and they prevented a theft. They used their combined hashing dominance to prevent a theft. I applaud them for that. From a moral perspective this is defensible, and my suspicion that we have more backing for BCH than you can see by following hash-rate charts has once again proven true. But BCH's dilemma is revealed again as well: we need more of the SHA-256 hash-rate cake, because we actually do not want any entity in this space to have more than 50% of the hash power.

*** EDIT 2 *** Added Satoshi's quote from the whitepaper.
Recently we've seen some shouts from Memo users which touched on the mempool acceptance policies. This post is a higher-level introduction to how we can manage mempool issues. It isn't a direct answer to those shouts; it's just meant to bring a better understanding for all. In any full node there is a mempool of validated transactions. Back in 2014 or so we had some attacks where people were sending millions of transactions to the network, and the effect was full nodes going down because they ran out of memory. We initially had some ideas on how to protect the node, but we quickly realized that we had to have a simple goal: always accept real money transactions while limiting the inflow of non-money transactions. What this means in real-world terms is that if someone is spamming the network with silly transactions in order to make it slow and unusable, we can distinguish between those and others, so that people standing in a store wanting to pay for something won't ever notice the "attack". Again, all this is to protect the full node from being overwhelmed, holding too many transactions in its memory pool, and crashing from running out of memory. The main way to do this was discussed a couple of years ago. The main approach in Bitcoin Core is fees. And nothing but fees. Let's improve on that and define a list of priorities:
Coin-age of spent coin (days-destroyed). Older is better.
Ratio of inputs to outputs in one transaction. More inputs is better.
Sigops count. Less is better.
Transaction size in bytes. Smaller is better.
Fees paid to the miner. Higher is better.
For instance, we already have, and have had for many years, a free-transaction limiter, which means that zero-fee transactions are allowed, but only a certain number per minute are accepted. The Memo case violates the first rule in a particularly spectacular fashion, without offsetting it by being significantly better on any of the other points. In the coming years we'll see all mining nodes implement the above priority list, where nodes protect themselves from being overwhelmed with cheap transactions by rejecting ones that show very low effort. At the same time, people who spend money in a store will typically have a very good score on the priorities table, and those transactions will always be accepted into the mempool.
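The five priorities above could be combined into a single acceptance score. This is an illustrative sketch only: the function name and weights are invented here, and a real node's policy code would be considerably more involved:

```python
# Illustrative scoring of the five priorities listed above.
# Higher score = more likely to be accepted into the mempool.
# Weights are made up for the sketch; a real policy would tune them.

def mempool_priority(days_destroyed, n_inputs, n_outputs,
                     sigops, size_bytes, fee_satoshis):
    score = 0.0
    score += days_destroyed                         # 1. older coins are better
    score += 10.0 * (n_inputs / max(n_outputs, 1))  # 2. more inputs is better
    score -= 0.5 * sigops                           # 3. fewer sigops is better
    score -= size_bytes / 100.0                     # 4. smaller is better
    score += fee_satoshis / 100.0                   # 5. fees still count
    return score

# A "real money" payment: month-old coin, 1-in-2-out, small, modest fee.
payment = mempool_priority(30.0, 1, 2, 2, 226, 300)
# Memo-style spam: brand-new coin, 1-in-many-out, zero fee.
spam = mempool_priority(0.0, 1, 20, 2, 500, 0)
print(payment > spam)  # True: the payment wins on every axis
```

The point of the multi-axis score is exactly what the post argues: a store payment scores well even with a low fee, while spam that violates the coin-age rule cannot buy its way in cheaply.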
"Infinity" patch for Bitcoin Core v0.12.1, v0.13.2, v0.14.0 — Support SegWit *and* larger blocks
If you…

run a full node
are a user, not a miner
don't particularly care how large the blocks are
are concerned about undiscovered bugs in Bitcoin Unlimited
want to support SegWit and larger blocks
…then this patch is for you. This patch contains the minimal changes necessary to make Bitcoin Core accept blocks of any size (up to the overall message size limit of 32 MiB). It does this without removing or neutering the protections against blocks with excessive numbers of signature operations ("sigops"). The maximum number of sigops allowed scales linearly with the size (weight) of the block. Blocks at or smaller than Core's current limit are treated exactly the same as by unpatched Bitcoin Core, meaning this patch will have no effect until and unless a hard fork to larger blocks occurs. If a hard fork does occur, nodes running this patch will follow whichever chain demonstrates the most work, regardless of the sizes of the blocks in that chain. This means that nodes running this patch may diverge from nodes running unpatched Bitcoin Core. Apply this patch only if you understand and agree to bear the risks involved.

Why might you want to use this patch?

Core users: If there's a hard fork, you're going to want a way to control your BTU balance. Your Core wallet won't see BTU-only outputs. You could run an instance of Bitcoin Unlimited alongside your Bitcoin Core node to access these BTU-only outputs, but you might be concerned about bugs in Bitcoin Unlimited, and you might not want to actively participate in this whole "emergent consensus" thing. By running a second Bitcoin Core instance with this "Infinity" patch, you will be able to access your BTU balances without needing to run Bitcoin Unlimited.

Unlimited users: If you want to increase on-chain capacity, then you might want to support both SegWit and larger base blocks. Maybe you don't really know what to set "EB" and "AD" to; maybe you'd rather not have to care. If you simply want to follow whichever chain has the most work, then you don't need the complexity (and risks) of Bitcoin Unlimited. By running your node with this "Infinity" patch, you will have the best of both worlds.

Where is the patch?
You can get the patch for your preferred version of Bitcoin Core here (see the links at the bottom).
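The linear sigop scaling the patch describes might look like the following. This is a sketch under the assumption that Core's historical ratio of 20,000 sigops per 1 MB block is simply extended; it is not the patch's actual code, which may round differently or use weight units:

```python
# Sketch: scale the sigop limit linearly with block size, preserving
# Core's historical ratio of 20,000 sigops per 1,000,000 bytes.
# (Assumed ratio; the real patch may round or count differently.)

LEGACY_LIMIT = 20_000
LEGACY_SIZE  = 1_000_000

def max_sigops(block_size_bytes):
    # Round up to the next whole MB, so a block at or under 1 MB keeps
    # exactly the old limit and larger blocks scale proportionally.
    mbs = -(-block_size_bytes // LEGACY_SIZE)  # ceiling division
    return mbs * LEGACY_LIMIT

print(max_sigops(1_000_000))  # 20000: identical to unpatched Core
print(max_sigops(8_000_000))  # 160000: 8x the sigop budget for 8 MB
```

This matches the patch description's two properties: blocks at or below the current limit behave exactly as in unpatched Core, and the sigop protection is preserved rather than removed for larger blocks.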
This release also provides an RPC called 'signdata' to generate signatures compatible with the CHECKDATASIG opcode. Like the previous release, it is compatible with both the Bitcoin Cash and SV changes to the consensus rules. The SV feature set is disabled by default; the default policy is to activate the set of changes as defined by bitcoincash.org. List of notable changes and fixes to the code base:
Fix gitian build for macOS
Improve the script fuzz testing
In GBT, match fees and sigops with the correct tx
Improve propagation of non-final and too-long-mempool-chain transactions by deferring them until the relevant block arrives
New RPC: signdata to generate signatures compatible with the CHECKDATASIG opcode
Help needed diagnosing another Bitcoin Unlimited Cash orphaned block
We had yet another bitcoin cash orphan this morning, at 7:11:23am EST. I attached the log and the getinfo() results below. I remember that jtoomim has said he was willing to look at logs, so perhaps he or someone else can figure this one out. In this case, it does not appear as if bandwidth restrictions had any impact. The daemon never hit the bandwidth cap at any time, before or after the block was found by Bitcoin Unlimited Cash. The block was accepted by the daemon as valid, and then our checker later determined that it wasn't present on the main chain. Does this log contain any information that could assist in determining why the orphan rate is around 5%? I thought that it should be lower than that.
Letting FEES float without letting BLOCKSIZES float is NOT a "market". A market has 2 sides: One side provides a product/service (blockspace), the other side pays fees/money (BTC). An "efficient market" is when players compete and evolve on BOTH sides, approaching an ideal FEE/BLOCKSIZE EQUILIBRIUM.
Or to put it in terms pretty much everyone has heard of: a market is about supply and demand (not just demand).
People buying and selling widgets isn't a "fee market" it's a "widget market".
People buying and selling blockspace isn't a "fee market" - it's a "blockspace market". (Yes, Virginia, MINERS SELL BLOCKSPACE, and it's a generic commodity.)
The terminology "fee market" is totally absurd: when you're looking at a market, you name it based on the product/service being provided, not based on the money being paid. When you talk about the price of a loaf of bread or a gallon of milk, you don't talk about a goddamn "dollar market" - you talk about the "baked goods market" or the "dairy market". And in a market, you don't freeze the supply of something. (Remember, the supply of BITCOINS is fixed. But the supply of BITCOIN TRANSACTIONS is not fixed - it can and should rise, to accommodate demand. This probably sounds too obvious to mention - but I have actually seen people posting on r/bitcoin who got these two things mixed up.) When we say that we want a market to be "efficient", that's also a TWO-PART PROPOSITION:
We want the product/service to be high-quality (and available in sufficient supply)
We want the product/service to be low-price
Blockspace is a product/service, and like all products/services, it migrates to the cheapest place where it can be produced, which these days means mainly in China. And like all products/services - we want the product/service to be the highest possible quality for the lowest possible price. Translated into Bitcoin terms, that means that we want:
security & efficiency: no double-spends, no transaction delays
low fees: miners should be reasonably compensated for their service, but we shouldn't let them suck up more fees by artificially limiting blockspace
This whole post is based on the very important essay on Medium.com posted today by u/Noosterdam:
Core is Breaking Bitcoin's Store-of-Value Function: Artificially limiting the blocksize to create a “fee market” = a backdoor way to raise the 21M coin cap
(It's time we started recognizing these people as leading voices on the economic fundamentals of Bitcoin. They have emerged organically over the years, because they have been right about so many of Bitcoin's economic aspects - unlike many of the paid "experts" from Blockstream, many of whom have been totally clueless about Bitcoin's economic aspects.) (And it's also time we started recognizing the dangers of a centralized cartel forming to create artificial blockspace scarcity and artificial fee inflation - which, as u/Noosterdam reminded us today, is just as bad as money inflation.) Ever heard of "supply and demand"? The phrase "fee market" only talks about the demand side, while deliberately ignoring the supply side. Sorry, but that's not how you do economics. "Demand-side economics" is just as ridiculous as "supply-side economics". Both are fraudulent. So, let's look honestly at both sides of the market. What do we see?
From the perspective of miners, many of them might think in terms of a "fee market".
From the perspective of users, many of them might think in terms of a "blockspace market". (Not just to ensure low fees - but also to ensure that their transactions even get through on time / at all during high-traffic periods.)
Miners and users are both important

Maybe users haven't seemed as "important" as miners so far, in the grand scheme of things. "Fee-paying users" are of course a more decentralized group than "blockspace-providing miners" - which might be part of the reason why devs haven't invited users to meetings in Hong Kong or Silicon Valley to whisper sweet nothings in their ears about giving users what they want. Each group (miners and users) has its own goals:
Many miners would be happy to see a "fee market" develop - with users competing to pay higher and higher fees (as much as their wallet will bear)
Many users would be happy to see a "blockspace market" develop - with miners competing to provide more and more blockspace (as much as their infrastructure will bear)
If you only support half of this (the "fee market" half, and not the "blockspace market"), then:
either you're clueless about economics and markets - or
you're trying to mislead people so you can get centralized control over Bitcoin.
Either way, good luck with that. If Core / Blockstream / certain miners only focus on creating a "fee market" without also creating a "blockspace market", then the only thing they're going to accomplish in the long run is turning Bitcoin into a shitcoin - because some other coin without artificial blockspace scarcity will quickly come along, efficiently use the bandwidth, disk space, memory, processing cycles, and electricity available, and overtake Bitcoin. (This could be an alt-coin - or it could be an upgrade to Bitcoin, such as Bitcoin Unlimited.)

Bitcoin's value depends on two factors

The value proposition of Bitcoin is based on TWO aspects:
We need to be able to prevent miners from INCREASING THE COIN SUPPLY BEYOND 21 MILLION
We need to be able to prevent miners from FREEZING THE BLOCKSIZE IN ORDER TO INCREASE THE FEES
The price of a bitcoin is something we want to keep HIGH - to avoid DILUTING our wealth. This incentivizes us to keep the Bitcoin supply FIXED (21M). The price of a Bitcoin transaction is something we want to keep LOW - to avoid ERODING our wealth (miners sucking up our BTC via high fees). This incentivizes us to keep Bitcoin fees LOW. Don't let the miners unilaterally sneak artificial fee inflation into Bitcoin by artificially limiting the blocksize! Seriously, it's time to throw the discredited, fraudulent phrase "fee market" into the dustbin of history - and use something that actually paints the correct economic picture, like "fee/blocksize equilibrium".
Until there is a real, working, live release of lightning network, it is irresponsible to tout it as a solution
Furthermore, once it is out, it will have to pass the test of time -- the same kind of test Bitcoin had to pass when it was released: at least a year or two to ensure it's viable and working without major hiccups, crashes, or other downfalls, such as being subject to extreme regulation (which I think is virtually inevitable, especially if it grew to any significant size). And if it's peer-to-peer, that data would have to be stored via a blockchain... lol, doing nothing to solve the data-storage bloat that Core members so adamantly try to limit (i.e. lukejr wanting 300kb blocks). SegWit relies on the Lightning Network for scaling, but we don't even know if it's practical (which I don't think it is), yet they are trying to put the cart before the horse. IMO it's like testing whether a new type of bitcoin would be successful; it would have to go through all the same growth cycles as Bitcoin to become viable. Also, correct me if I'm wrong: if we adopt SegWit, and it turns out the Lightning Network is ineffective and we need to scale blocks the "old-fashioned way" of increasing the blocksize, then rather than a simple 1MB->2MB increase, doing that increase with SegWit could cost us up to 4x as much per megabyte. Is my understanding correct? If so, this is a major setback for scaling when Bitcoin needs to grow to 4MB and 8MB (as potentially 4x more space is needed, creating more data-storage requirements and more spam vulnerability -- ironically the same things that lukejr and others are trying to avoid). Edit: I'm unsure how much SegWit will increase the average transaction size, but it's clear to me that it will increase it, since it adds more data/instructions within the Bitcoin blocks.
SegWit resolves it by making the hashing more like H(H(tx_with_signatures_removed)||0)... so the inner hash can be cached, and the hashing then only grows like O(N). The quadratic behavior cannot be fixed just by changes to implementations; it's inherent in the legacy transaction format.
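The difference is easy to see in a sketch: under the legacy sighash scheme, each input signs its own modified copy of (roughly) the whole transaction, while a BIP143-style scheme reuses cached midstate hashes so each input hashes a fixed-size preimage. The byte counts below are illustrative placeholders, not exact serialization sizes:

```python
# Sketch of why legacy sighash work is O(N^2) in the number of inputs
# while the segwit-style (BIP143) digest is O(N). Sizes are rough.

def legacy_hash_bytes(n_inputs, bytes_per_input=150):
    # Each input re-hashes its own copy of the whole transaction.
    tx_size = n_inputs * bytes_per_input
    return n_inputs * tx_size            # O(N^2) bytes hashed

def segwit_hash_bytes(n_inputs, midstate_bytes=200):
    # hashPrevouts / hashSequence / hashOutputs are computed once and
    # reused, so each input hashes a fixed-size preimage.
    return n_inputs * midstate_bytes     # O(N) bytes hashed

for n in (100, 1_000, 10_000):
    print(n, legacy_hash_bytes(n), segwit_hash_bytes(n))
# Legacy work grows 100x when inputs grow 10x; segwit-style grows 10x.
```

At 10,000 inputs the legacy scheme hashes on the order of gigabytes per transaction, which is the attack surface the quadratic-hashing discussion is about.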
As Tom Zander points out, QH has actually been fixed for years: https://www.reddit.com/btc/comments/6m45t2/reminder_why_do_we_need_segwit_spoiler_we_dont/ And there are multiple ways to fix it, as Dr. Wright spoke about. Greg makes it sound like SegWit is the only software implementation that can deal with it, but the word "implementation" is ambiguous here. He is only right in a very narrow sense of the word... not in the sense that he conveys and tries to fool people with. But he words it very carefully, like a master politician. Bottom line: don't expect Greg to be fully honest about Bitcoin, at least until we've forked away from his perverted vision of Bitcoin as a settlement network.
As far as I can see: 0.13.1 with SegWit will be released soon. 95% will not be reached. SegWit won't be activated unless ViaBTC and others(?) can keep 5%+ of network hashpower. Let's assume SegWit gets blocked for months or years; what's the plan on either side? Also, is 95% hardcoded in 0.13.1? Any chance 0.13.1 will be released with a lower threshold? Is blocking SegWit a political stance, or is it a genuine technological concern? I was under the impression SegWit is OK, but that on-chain scaling with bigger blocks should also be implemented, preferably before SegWit. Now I read more and more that SegWit isn't OK, because it introduces 'banking-like' hubs... I know, lots of questions, but can someone enlighten me and others? Edit: 'banking-like' hubs are a concern about LN... what's the concern exactly about SegWit?
Bitcoin Cash brings sound money to the world. Merchants and users are empowered with low fees and reliable confirmations. The future shines brightly with unrestricted growth, global adoption, permissionless innovation, and decentralized development.

Sigops. For templates with "segwit" enabled as a rule, the "sigoplimit" and "sigops" keys must use the new values as calculated in BIP 141.

Block assembly with witness transactions. When block assembly is done without witness transactions, no changes are made by this BIP, and blocks should be assembled as previously.

Today, at precisely 9 a.m. ET on May 15, 2020, the Bitcoin Cash network completed another upgrade adding new features to the blockchain.

Bitcoin transactions are identified by a 64-digit hexadecimal hash called a transaction identifier (txid), which is based on both the coins being spent and on who will be able to spend the results of the transaction. Unfortunately, the way the txid is calculated allows anyone to make small modifications to the transaction that will not change its meaning, but will change the txid. This is ...

Sigops. Sigops per block is currently limited to 20,000. We change this restriction as follows: sigops in the current pubkey script, signature script, and P2SH check script are counted at 4 times their previous value. The sigop limit is likewise quadrupled to ≤ 80,000. Each P2WPKH input is counted as 1 sigop.
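The BIP 141-style sigop accounting described above (legacy sigops counted at 4x, the limit quadrupled to 80,000, each P2WPKH input counted as 1) can be restated in a few lines. This is a sketch of the rule as quoted, not any node's actual code:

```python
# BIP 141 sigop accounting as described above: legacy sigops are
# scaled by 4, the block limit is scaled by 4 (20,000 -> 80,000),
# and each P2WPKH input costs 1.

WITNESS_SCALE_FACTOR  = 4
MAX_BLOCK_SIGOPS_COST = 20_000 * WITNESS_SCALE_FACTOR  # 80,000

def sigops_cost(legacy_sigops, p2wpkh_inputs):
    return legacy_sigops * WITNESS_SCALE_FACTOR + p2wpkh_inputs

# A block exactly at the old 20,000 limit is still exactly at the new
# 80,000-cost limit: the rescaling is neutral for pre-segwit blocks.
print(sigops_cost(20_000, 0) == MAX_BLOCK_SIGOPS_COST)  # True
# Witness spends are 4x cheaper per sigop under this accounting.
print(sigops_cost(0, 80_000) == MAX_BLOCK_SIGOPS_COST)  # True
```

The point of multiplying both the counts and the limit by 4 is backward compatibility: legacy blocks keep exactly the headroom they had, while witness spends get a proportionally larger sigop budget.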