Improving Lock Performance in Java
After we introduced locked thread detection to Plumbr a couple of months ago, we started to receive queries along the lines of "hey, great, now I understand what is causing my performance issues, but what am I supposed to do about it?"
We are working hard to build solution instructions into our own product, but in this post I am going to share several common techniques you can apply independently of the tool used to detect the lock: lock splitting, concurrent data structures, protecting the data instead of the code, and reducing lock scope.
Locking is not evil, lock contention is
Whenever you face a performance problem in threaded code, there is a chance you will start blaming locks. After all, common "knowledge" holds that locks are slow and limit scalability. So if, armed with this "knowledge", you start optimizing the code by getting rid of locks, there is a good chance you end up introducing nasty concurrency bugs that surface later on.
So it is important to understand the difference between contended and uncontended locks. Lock contention occurs when a thread is trying to enter the synchronized block/method currently executed by another thread. This second thread is now forced to wait until the first thread has completed executing the synchronized block and releases the monitor. When only one thread at a time is trying to execute the synchronized code, the lock stays uncontended.
As a matter of fact, synchronization in the JVM is optimized for the uncontended case, and for the vast majority of applications uncontended locks pose next to no overhead during execution. So it is not locks you should blame for performance problems, but contended locks. Equipped with this knowledge, let's see what we can do to reduce either the likelihood or the length of contention.
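The contended case is easy to reproduce. The following minimal sketch (class and field names are ours, not from the article) has two threads repeatedly entering the same synchronized block, so almost every acquisition potentially contends; with a single thread the very same lock would stay uncontended and nearly free:

```java
// Two threads hammering one monitor: each increment is a potential
// contention event. With one thread, the same lock would be uncontended.
public class ContendedCounter {
    private int count = 0;
    private final Object lock = new Object();

    private void increment() {
        synchronized (lock) {   // contended while both threads are active
            count++;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ContendedCounter c = new ContendedCounter();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) c.increment();
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("final count = " + c.count);
    }
}
```

The final count is always 200,000: correctness is preserved either way, only the time spent waiting for the monitor differs.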
Protect the data not the code
A quick way to achieve thread safety is to lock access to the whole method. For example, take a look at the following naive attempt to build an online poker server:
```java
class GameServer {
  public Map<String, List<Player>> tables = new HashMap<String, List<Player>>();

  public synchronized void join(Player player, Table table) {
    if (player.getAccountBalance() > table.getLimit()) {
      List<Player> tablePlayers = tables.get(table.getId());
      if (tablePlayers.size() < 9) {
        tablePlayers.add(player);
      }
    }
  }
  public synchronized void leave(Player player, Table table) {/* body skipped for brevity */}
  public synchronized void createTable() {/* body skipped for brevity */}
  public synchronized void destroyTable(Table table) {/* body skipped for brevity */}
}
```

The author's intentions were good: when new players join() a table, there must be a guarantee that the number of players seated at the table does not exceed the table capacity of nine.
But were such a solution actually responsible for seating players at tables, then even on a poker site with moderate traffic the system would be doomed to constantly trigger contention events, with threads waiting for the lock to be released. The locked block contains the account balance and table limit checks, which can involve expensive operations that increase both the likelihood and the length of the contention.
The first step towards a solution is making sure we protect the data, not the code, by moving the synchronization from the method declaration into the method body. In the minimalistic example above this might not change much at first. But let's consider the whole GameServer interface, not just the single join() method:
```java
class GameServer {
  public Map<String, List<Player>> tables = new HashMap<String, List<Player>>();

  public void join(Player player, Table table) {
    synchronized (tables) {
      if (player.getAccountBalance() > table.getLimit()) {
        List<Player> tablePlayers = tables.get(table.getId());
        if (tablePlayers.size() < 9) {
          tablePlayers.add(player);
        }
      }
    }
  }
  public void leave(Player player, Table table) {/* body skipped for brevity */}
  public void createTable() {/* body skipped for brevity */}
  public void destroyTable(Table table) {/* body skipped for brevity */}
}
```

What originally seemed a minor change now affects the behaviour of the whole class. Previously, whenever players were joining tables, the synchronized methods locked on the GameServer instance (this) and introduced contention events for players simultaneously trying to leave() tables. Moving the lock from the method signature into the method body postpones the locking and reduces the likelihood of contention.
Reduce the lock scope
Now, after making sure it is the data we actually protect, not the code, we should make sure our solution locks only what is necessary. For example, when the code above is rewritten as follows:
```java
public class GameServer {
  public Map<String, List<Player>> tables = new HashMap<String, List<Player>>();

  public void join(Player player, Table table) {
    if (player.getAccountBalance() > table.getLimit()) {
      synchronized (tables) {
        List<Player> tablePlayers = tables.get(table.getId());
        if (tablePlayers.size() < 9) {
          tablePlayers.add(player);
        }
      }
    }
  }
  // other methods skipped for brevity
}
```

then the potentially time-consuming operation of checking the player's account balance (which may involve IO) is now outside the lock scope. Notice that the lock was introduced only to protect against exceeding the table capacity; the account balance check is not in any way part of this protective measure.
Split your locks
Looking at the last code example, you will notice that the whole data structure is still protected by a single lock. Considering that we might hold thousands of poker tables in this structure, it still poses a high risk of contention events, even though all we need is to protect each table individually from exceeding its capacity.
There is an easy way to introduce individual locks per table, as in the following example:
```java
public class GameServer {
  public Map<String, List<Player>> tables = new HashMap<String, List<Player>>();

  public void join(Player player, Table table) {
    if (player.getAccountBalance() > table.getLimit()) {
      List<Player> tablePlayers = tables.get(table.getId());
      synchronized (tablePlayers) {
        if (tablePlayers.size() < 9) {
          tablePlayers.add(player);
        }
      }
    }
  }
  // other methods skipped for brevity
}
```

Now that we synchronize access only to a single table instead of all the tables, we have significantly reduced the likelihood of the lock becoming contended. With, for example, 100 tables in our data structure, the likelihood of contention is now 100x smaller than before.
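As a sanity check of the split-lock approach, here is a minimal, self-contained sketch (types and counts are our placeholders): several threads race to join one table while holding only that table's list monitor, and the nine-seat invariant still holds:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Lock splitting sketch: the monitor is the table's own player list,
// not the map of all tables, so other tables stay uncontended.
public class LockSplittingDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<String, List<String>> tables = new HashMap<>();
        tables.put("table-1", new ArrayList<>());
        List<String> tablePlayers = tables.get("table-1");

        Runnable join = () -> {
            for (int i = 0; i < 1_000; i++) {
                synchronized (tablePlayers) {       // per-table lock
                    if (tablePlayers.size() < 9) {  // check and add atomically
                        tablePlayers.add("player");
                    }
                }
            }
        };
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) threads[i] = new Thread(join);
        for (Thread t : threads) t.start();
        for (Thread t : threads) t.join();
        System.out.println("seated = " + tablePlayers.size());
    }
}
```

Despite 4,000 concurrent join attempts, exactly nine players end up seated, because the size check and the add happen under the same per-table monitor.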
Use concurrent data structures
Another improvement is to drop traditional single-threaded data structures and use data structures designed explicitly for concurrent usage. For example, picking ConcurrentHashMap to store all your poker tables would result in code similar to the following:
```java
public class GameServer {
  public Map<String, List<Player>> tables = new ConcurrentHashMap<String, List<Player>>();

  public synchronized void join(Player player, Table table) {/* method body skipped for brevity */}
  public synchronized void leave(Player player, Table table) {/* method body skipped for brevity */}

  public void createTable() {
    Table table = new Table();
    tables.put(table.getId(), new ArrayList<Player>());
  }

  public void destroyTable(Table table) {
    tables.remove(table.getId());
  }
}
```

The synchronization in the join() and leave() methods still behaves as in our previous example, as we need to protect the integrity of individual tables, so ConcurrentHashMap does not help in this regard. But as we are also creating new tables and destroying tables in the createTable() and destroyTable() methods, these operations on the ConcurrentHashMap are fully concurrent, permitting the number of tables to grow or shrink in parallel.
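ConcurrentHashMap also offers atomic compound operations that plain locking would otherwise have to provide. The sketch below (identifiers are ours, with types simplified to String/List) uses putIfAbsent, which installs a value only if the key is still missing, so concurrent createTable calls for the same id cannot clobber each other:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Table creation backed by ConcurrentHashMap: no synchronized keyword,
// the map's own atomic operations keep creation race-free.
public class ConcurrentTables {
    private final ConcurrentMap<String, List<String>> tables = new ConcurrentHashMap<>();

    void createTable(String id) {
        tables.putIfAbsent(id, new ArrayList<>());  // atomic: at most one list per id
    }

    void destroyTable(String id) {
        tables.remove(id);
    }

    public static void main(String[] args) throws InterruptedException {
        ConcurrentTables server = new ConcurrentTables();
        Runnable churn = () -> {
            for (int i = 0; i < 1_000; i++) {
                server.createTable("t" + (i % 10));   // two threads race on the same ids
            }
        };
        Thread a = new Thread(churn), b = new Thread(churn);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("tables = " + server.tables.size());
    }
}
```

Two threads race to create the same ten table ids 2,000 times, yet exactly ten tables exist afterwards and no lock was ever taken by our code.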
Other tips and tricks
- Reduce the visibility of the lock. In the examples above, the locks are declared public and thus visible to the world, so there is a chance that someone else will ruin your work by also locking on your carefully picked monitors.
- Check out?java.util.concurrent.locks?to see whether any of the locking strategies implemented there will improve the solution.
- Use atomic operations. The simple counter increment we are effectively performing in the examples above does not actually require a lock. Tracking the seat count with an AtomicInteger would suit this example just fine.
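To illustrate the java.util.concurrent.locks tip: a ReentrantReadWriteLock fits workloads where table data is read far more often than modified, because readers share the lock and only writers exclude everyone else. A hedged sketch with simplified types of our own choosing:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Read-mostly table registry: lookups take the shared read lock and do not
// block each other; only table creation takes the exclusive write lock.
public class ReadMostlyTables {
    private final Map<String, Integer> tableLimits = new HashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    void createTable(String id, int limit) {
        lock.writeLock().lock();         // exclusive: blocks readers and writers
        try {
            tableLimits.put(id, limit);
        } finally {
            lock.writeLock().unlock();
        }
    }

    Integer limitOf(String id) {
        lock.readLock().lock();          // shared: many readers proceed in parallel
        try {
            return tableLimits.get(id);
        } finally {
            lock.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        ReadMostlyTables t = new ReadMostlyTables();
        t.createTable("t1", 100);
        System.out.println("limit = " + t.limitOf("t1"));
    }
}
```

Whether this beats plain synchronized depends on the read/write ratio and the cost of the guarded work, so it is worth measuring before committing to it.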
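A minimal sketch of the atomic-operations tip (the capacity and identifiers are our placeholders, not from the article): a seat is reserved with incrementAndGet and the reservation is rolled back when the table turns out to be full, so no monitor is needed at all:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Lock-free seat counter: incrementAndGet reserves a seat atomically;
// an over-capacity reservation is immediately rolled back.
public class SeatCounter {
    private static final int CAPACITY = 9;
    private final AtomicInteger seated = new AtomicInteger(0);

    boolean tryJoin() {
        int seat = seated.incrementAndGet();
        if (seat > CAPACITY) {           // table was already full: undo
            seated.decrementAndGet();
            return false;
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        SeatCounter table = new SeatCounter();
        Runnable join = () -> {
            for (int i = 0; i < 1_000; i++) table.tryJoin();
        };
        Thread a = new Thread(join), b = new Thread(join);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("seated = " + table.seated.get());
    }
}
```

The counter may transiently overshoot nine before the rollback, which is fine for a seat count; if other state must change together with the count, a lock (or a compareAndSet loop) is still required.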
Hopefully this article helps you solve your lock contention issues, whether you are using Plumbr's automatic lock detection solution or manually extracting the information from thread dumps.
Reference: http://java.dzone.com/articles/improving-lock-performance
轉(zhuǎn)載于:https://www.cnblogs.com/davidwang456/p/4243345.html