Summary of Interview Questions Related to Concurrency and Multithreading in Xiaodi Class

1. What are processes, threads, and coroutines, and what is the relationship between them?

  • Process:

    • Essentially an independently executing program. A process is the operating system's basic unit of resource allocation and scheduling.
  • Thread:

    • The smallest unit of execution that the operating system can schedule. A thread is contained within a process and is the actual unit of work inside it. A process can run multiple threads concurrently, each performing a different task, with the switching controlled by the system.
  • Coroutine:

    • Also known as a micro-thread, a coroutine is a lightweight user-mode thread. Unlike threads and processes, whose context switches go through the system kernel, a coroutine's context switch is controlled by the user program and carries its own context, which is why it is also called a user-level thread. One thread can host multiple coroutines. Threads and processes rely on synchronous mechanisms, while coroutines are asynchronous. Java does not support coroutines in its native syntax; languages such as Python, Lua, and Go currently do.
  • Relationship:

    • A process can contain multiple threads, which lets the computer run two or more tasks at the same time. A thread is the smallest execution unit of a process. The CPU scheduler switches between processes and threads; the more processes and threads there are, the more CPU time scheduling consumes. What actually runs on the CPU is a thread, and one thread can correspond to multiple coroutines.

2. Talk about the difference between concurrency and parallelism, and give an example

  • Concurrency:

    • A single-core CPU simulates multiple threads by switching between them very quickly.
  • Parallelism:

    • Multi-core CPU, multiple threads can be executed at the same time ;
    • eg: Thread pool!
  • Concurrency means handling multiple tasks within a period of time when viewed at the macro level. Parallelism means multiple tasks really run at the same instant.

For example:

  • Concurrency: multitasking, like listening to a lecture while watching a movie; there is only one brain (CPU), so the tasks take turns.
  • Parallelism: like Naruto's shadow clone jutsu, there is more than one of you, and each clone can do a different thing.

3. What are the ways to implement multithreading in java, what are the differences, and which ones are more commonly used?

3.1 Inherit Thread

  • Extend Thread, override the run() method, create an instance, and call start()
  • Advantages: the simplest and most direct way to write the code
  • Disadvantages: no return value; after extending Thread the class cannot extend any other class, so extensibility is poor
```java
public class ThreadDemo1 extends Thread {
    @Override
    public void run() {
        System.out.println("Inherit Thread to implement multithreading, name:" + Thread.currentThread().getName());
    }
}

public static void main(String[] args) {
    ThreadDemo1 threadDemo1 = new ThreadDemo1();
    threadDemo1.setName("demo1");
    // execute start
    threadDemo1.start();
    System.out.println("Main thread name:" + Thread.currentThread().getName());
}
```

3.2 Implement the Runnable interface

  • A custom class implements Runnable and its run() method; create a Thread, pass the Runnable implementation object to the Thread constructor as a parameter, and call the start() method.
  • Advantages: the class can implement multiple interfaces and can still extend another class
  • Disadvantages: no return value, and it cannot be started directly; it has to be passed into a Thread instance, which is then started
```java
public class ThreadDemo2 implements Runnable {
    @Override
    public void run() {
        System.out.println("Realize multithreading through Runnable, name:" + Thread.currentThread().getName());
    }
}

public static void main(String[] args) {
    ThreadDemo2 threadDemo2 = new ThreadDemo2();
    Thread thread = new Thread(threadDemo2);
    thread.setName("demo2");
    // start thread execution
    thread.start();
    System.out.println("Main thread name:" + Thread.currentThread().getName());
}

// With JDK 8, a lambda expression can be used instead
public static void main(String[] args) {
    Thread thread = new Thread(() -> {
        System.out.println("Realize multithreading through Runnable, name:" + Thread.currentThread().getName());
    });
    thread.setName("demo2");
    // start thread execution
    thread.start();
    System.out.println("Main thread name:" + Thread.currentThread().getName());
}
```

3.3 Implement the Callable interface

  • Create a class that implements the Callable interface, implement the call() method, and wrap the Callable object with FutureTask to achieve multithreading.
  • Advantages: has a return value, good extensibility
  • Disadvantages: only supported since JDK 5; requires implementing call() and combining several classes such as FutureTask and Thread
```java
public class MyTask implements Callable<Object> {
    @Override
    public Object call() throws Exception {
        System.out.println("Realize multithreading through Callable, name:" + Thread.currentThread().getName());
        return "This is the return value";
    }
}

public static void main(String[] args) {
    // JDK 1.8 lambda expression
    FutureTask<Object> futureTask = new FutureTask<>(() -> {
        System.out.println("Realize multithreading through Callable, name:" + Thread.currentThread().getName());
        return "This is the return value";
    });
    // MyTask myTask = new MyTask();
    // FutureTask<Object> futureTask = new FutureTask<>(myTask);

    // FutureTask implements Runnable, so it can be placed in a Thread and started
    Thread thread = new Thread(futureTask);
    thread.setName("demo3");
    // start thread execution
    thread.start();
    System.out.println("Main thread name:" + Thread.currentThread().getName());
    try {
        // Get the return value
        System.out.println(futureTask.get());
    } catch (InterruptedException e) {
        // thrown if interrupted while waiting
        e.printStackTrace();
    } catch (ExecutionException e) {
        // thrown if the task threw an exception during execution
        e.printStackTrace();
    }
}
```

3.4 Create a thread through the thread pool

  • Implement the Runnable interface and its run() method, create a thread pool, and call the execute() method, passing in the task object
  • Advantages: safe and high-performance, threads are reused
  • Disadvantages: only supported since JDK 5; needs to be used together with Runnable
```java
public class ThreadDemo4 implements Runnable {
    @Override
    public void run() {
        System.out.println("Realize multithreading through thread pool + Runnable, name:" + Thread.currentThread().getName());
    }
}

public static void main(String[] args) {
    // Create the thread pool
    ExecutorService executorService = Executors.newFixedThreadPool(3);
    for (int i = 0; i < 10; i++) {
        // The thread pool executes the task
        executorService.execute(new ThreadDemo4());
    }
    System.out.println("Main thread name:" + Thread.currentThread().getName());
    // Shut down the thread pool
    executorService.shutdown();
}
```
  • The most commonly used are Runnable and the fourth approach, thread pool + Runnable, which is simple, easy to extend, and high-performance (the pooling idea)

3.5 What is the difference between Runnable, Callable, and Thread?

  • Thread is a class and can only be extended, while Runnable and Callable are interfaces whose methods must be implemented
  • Extending Thread means overriding run(); implementing Runnable means implementing run(); implementing Callable means implementing call()
  • Thread and Runnable have no return value; Callable has a return value
  • A class that implements Runnable cannot call start() directly; it must be put into a new Thread, and start() is then called on that Thread instance
  • A class that implements Callable must be wrapped in a FutureTask, the FutureTask placed into a Thread, and start() called on that Thread instance. The return value is obtained simply by calling get() on the FutureTask!

4. How many states (life cycle) of the thread?

A thread has six states:

```java
public enum State {
    /** New: the thread has been created but not yet started */
    NEW,
    /** Runnable: running in the JVM, or ready and waiting for the CPU */
    RUNNABLE,
    /** Blocked: waiting to acquire a monitor lock */
    BLOCKED,
    /** Waiting: waiting indefinitely until another thread wakes it up */
    WAITING,
    /** Timed waiting: waits at most a specified time, then stops waiting */
    TIMED_WAITING,
    /** Terminated: the thread has finished executing */
    TERMINATED;
}
```

5. Related methods of thread state transition: sleep/yield/join wait/notify/notifyAll

Methods under Thread

  • sleep(): a method of Thread that suspends the thread for the specified time before resuming. It gives up the CPU but "does not release the lock"; it sleeps while holding the lock. The thread enters the timed-waiting state TIMED_WAITING and becomes RUNNABLE again when the sleep ends.
  • yield(): a method of Thread that pauses the current thread so other threads can run. It gives up the CPU and "does not release the lock", similar to sleep. Purpose: let threads of the same priority take turns, although taking turns is not guaranteed. Note: the thread does not enter BLOCKED; it goes straight back to RUNNABLE and only needs to regain the CPU.
  • join(): a method of Thread. Calling it from the main thread makes the main thread wait ("the lock is not released") until the thread on which join was called has finished, before other threads continue; see the small example below. Like letting an ambulance or police car pass first!
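To make join() concrete, here is a minimal sketch (the class name JoinDemo is illustrative, not from the course notes) in which the main thread waits for a worker to finish before continuing:

```java
public class JoinDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() ->
                System.out.println("worker is doing its job"));
        worker.start();

        // The main thread waits here until worker finishes,
        // like letting the ambulance pass first
        worker.join();

        System.out.println("main continues only after worker has finished");
    }
}
```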

Methods under Object

  • wait(): a method of Object. The current thread calls wait on the object, "releases the lock", and enters the object's wait queue. It must be woken by notify or notifyAll, or wakes up automatically after wait(timeout) expires (a small sketch follows below).
  • notify(): a method of Object. Wakes up a single thread waiting on the object's monitor, a "random wake-up".
  • notifyAll(): a method of Object. Wakes up all threads waiting on the object's monitor, "wakes up everyone".
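A small sketch of the wait/notify pattern described above (class and field names are illustrative). Note that wait() is always called inside a loop on the condition and while holding the object's monitor:

```java
public class WaitNotifyDemo {
    private static final Object lock = new Object();
    private static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                while (!ready) {
                    try {
                        lock.wait(); // releases the lock and enters the wait queue
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
                System.out.println("waiter woken up, ready = " + ready);
            }
        });
        waiter.start();

        Thread.sleep(500);
        synchronized (lock) {
            ready = true;
            lock.notify(); // wakes one thread waiting on this object's monitor
        }
    }
}
```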

Thread state transition flowchart

6. What methods can be used in Java to ensure thread safety?

  • Locking: e.g. synchronized / ReentrantLock
  • Declaring variables with volatile: lightweight synchronization, but it cannot guarantee atomicity (be ready to explain this)
  • Using thread-safe classes, such as the atomic classes AtomicXXX, etc.
  • Using thread-safe collection containers, such as CopyOnWriteArrayList / ConcurrentHashMap, etc.
  • ThreadLocal thread-local variables / semaphores such as Semaphore, etc. (a small example follows below)
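As one possible illustration of the thread-safe containers mentioned above (the class name and map key are made up for the example), ConcurrentHashMap and CopyOnWriteArrayList can be shared across threads without extra locking:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

public class SafeContainersDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> counts = new ConcurrentHashMap<>();
        List<String> log = new CopyOnWriteArrayList<>();

        Runnable task = () -> {
            // merge performs the read-modify-write atomically inside the map
            counts.merge("hits", 1, Integer::sum);
            log.add(Thread.currentThread().getName());
        };

        Thread t1 = new Thread(task, "t1");
        Thread t2 = new Thread(task, "t2");
        t1.start(); t2.start();
        t1.join();  t2.join();

        System.out.println(counts.get("hits")); // 2
        System.out.println(log.size());         // 2
    }
}
```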

7. Do you understand the volatile keyword? Can you explain the difference between it and synchronized?

Thread safety:

Thread safety involves two aspects: visibility and atomicity!

Volatile characteristics

In layman's terms, a modification of a volatile variable made by thread A is visible to other threads; in other words, every time a thread reads a volatile variable it gets the latest value.

Comparison of the two

  • volatile is a lightweight synchronized that guarantees the visibility of shared variables. When a variable modified by the volatile keyword changes, the new value is immediately visible to other threads, avoiding dirty reads!
  • volatile is lightweight and can only be applied to variables; synchronized is heavyweight and can also be applied to methods and code blocks
  • volatile can only guarantee visibility and cannot be used for synchronization, because concurrent access to a volatile variable by multiple threads does not block.
    synchronized guarantees both visibility and atomicity, because only the thread that acquires the lock can enter the critical section, which ensures that all statements in the critical section execute. When multiple threads compete for the same synchronized lock object, blocking occurs.
  • volatile: guarantees visibility, but cannot guarantee atomicity
  • synchronized: guarantees visibility and also guarantees atomicity

Usage scenarios

volatile fits only when the write to the variable does not depend on its current value; for something like a++ executed by multiple threads, volatile cannot guarantee an atomic result.

Example:

Given volatile int i = 0 and many threads calling an increment operation on i, can volatile guarantee that the variable is safe?

No! volatile cannot guarantee the atomicity of operations on the variable.

  • The increment operation consists of three steps: read, add one, and write. Since the atomicity of these three sub-operations cannot be guaranteed, after n threads have executed i++ a total of n times, the final value of i is not n as you might expect, but a number smaller than n!

  • Explanation:

    • Suppose thread A performs an increment: it has just read the initial value of i, which is 0, and is then suspended!
    • Thread B now starts executing: it also reads the initial value 0, performs the increment, and i becomes 1.
    • Thread A then resumes: it adds 1 to the 0 it read earlier and writes the result, so after it finishes, i is written as 1.
    • We expected the output to be 2, but the output is 1, smaller than expected!
  • Code example:

```java
import java.util.ArrayList;
import java.util.List;

public class VolatileTest {
    public volatile int i = 0;

    public void increase() {
        i++;
    }

    public static void main(String[] args) throws InterruptedException {
        List<Thread> threadList = new ArrayList<>();
        VolatileTest test = new VolatileTest();
        for (int j = 0; j < 10000; j++) {
            Thread thread = new Thread(new Runnable() {
                @Override
                public void run() {
                    test.increase();
                }
            });
            thread.start();
            threadList.add(thread);
        }
        // Wait for all threads to finish executing
        for (Thread thread : threadList) {
            thread.join();
        }
        System.out.print(test.i); // sample output: 9995, i.e. less than 10000
    }
}
```

    Summary

    volatile does not require locking, so it never blocks a thread and is lighter-weight than synchronized, which may cause threads to block! volatile also forbids instruction reordering, so the related JVM optimizations are lost and efficiency is a little weaker.

The Java Memory Model (JMM)

The JMM stipulates that all variables live in main memory and that each thread has its own working memory. A thread operates on variables in its working memory and cannot operate on main memory directly. When a variable is modified with volatile, the latest value must be read from main memory before every read, and the value must be written back to main memory immediately after every write. A volatile variable therefore always shows its latest value: if thread 1 modifies variable v, thread 2 sees the change immediately!

8. Volatile can avoid instruction rearrangement. Can you explain what is instruction rearrangement?

  • There are two types of instruction reordering:

    • Compiler reordering
    • Runtime reordering

When the JVM compiles Java code, or the CPU executes JVM bytecode, it may reorder the existing instructions. The main purpose is to improve execution efficiency, on the premise that the program result does not change.

```java
int a = 3;         // step 1
int b = 4;         // step 2
int c = 5;         // step 3
int h = a * b * c; // step 4

// Definition order: 1, 2, 3, 4
// Execution order 1, 3, 2, 4 or 2, 1, 3, 4 gives the same result
```
  • Although instruction reordering can improve execution efficiency, multi-threading may affect the results. Is there any solution?

  • Solution: memory barrier (just understand~)

    • A memory barrier is a barrier instruction that makes the CPU constrain the ordering and results of the memory operations before and after the barrier!

Extension: the happens-before principle (just be aware of it~)

The memory visibility of volatile is one embodiment of the happens-before principle!

9. Tell me about the three elements of concurrent programming?

  • Atomicity
  • Orderliness
  • Visibility

9.1 Atomicity

  • Atomicity:

    • The smallest unit that can no longer be divided. Atomicity means that one or more operations either all succeed or all fail, cannot be interrupted partway through, and are not subject to context switching. Thread switching is what causes atomicity problems!
```java
int num = 1;  // atomic operation
num++;        // non-atomic: read num from main memory into working memory, add 1, write num back to main memory
// unless an atomic class is used, i.e. one of the atomic variable classes in java.util.concurrent.atomic
// The solution is to use synchronized or a Lock (such as ReentrantLock) to turn this multi-step operation into an atomic one.
// volatile cannot be used here; as mentioned earlier, it only fits writes that do not depend
// on the current value, unlike a++ executed by multiple threads.

public class XdTest {
    // Method 1: use an atomic class
    // AtomicInteger num = new AtomicInteger(0); // the ++ operation is then atomic, no locking needed
    private int num = 0;

    // Method 2: use a lock; the operation can only run after the lock has been acquired
    Lock lock = new ReentrantLock();
    public void add1() {
        lock.lock();
        try {
            num++;
        } finally {
            lock.unlock();
        }
    }

    // Method 3: use synchronized; here the whole method is locked, above only the code block is locked
    public synchronized void add2() {
        num++;
    }
}
```

Core idea of the solution: treat a method or code block as a whole and ensure that it is indivisible!
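To complement Method 1 in the code above, a minimal sketch (the class name is illustrative) showing that AtomicInteger makes the increment atomic without explicit locking:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {
    private static final AtomicInteger num = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[10];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    num.incrementAndGet(); // atomic read-add-write, implemented with CAS
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println(num.get()); // always 10000
    }
}
```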

9.2 Orderliness

  • Orderliness:

    • Program execution should follow the order of the code, but the processor may reorder instructions. When the JVM compiles Java code or the CPU executes JVM bytecode, the existing instructions may be reordered; the main purpose is to improve execution efficiency, on the premise that the program result does not change.
```java
int a = 3;         // step 1
int b = 4;         // step 2
int c = 5;         // step 3
int h = a * b * c; // step 4

// Definition order: 1, 2, 3, 4
// Execution order 1, 3, 2, 4 or 2, 1, 3, 4 gives the same result (in the single-threaded case)
// Instruction reordering can improve execution efficiency, but under multithreading it may affect the result!
```

Suppose the following scenario:

```java
// Thread 1
before();     // after the initialization work is done, the run method below may officially run
flag = true;  // mark the resource as ready; if the resource is not properly initialized, the program may go wrong

// Thread 2
while (flag) {
    run();    // execute core business code
}

// ---------- After instruction reordering changes the order, the program goes wrong and is hard to troubleshoot ----------

// Thread 1
flag = true;  // marks the resource as ready, but the resource has not been properly initialized yet, so the program may go wrong
// Thread 2
while (flag) {
    run();    // execute core business code
}
before();     // initialization runs only now, although it should have completed before run could officially execute
```

9.3 Visibility

  • Visibility:

    • Thread A's modification of a shared variable can be seen immediately by another thread B!
```java
// Thread A executes
int num = 0;
// Thread A executes
num++;
// Thread B executes
System.out.print("num value:" + num);

// If thread A executes num++ and thread B then runs, thread B may see either of two results: 0 or 1.
```

Because num++ is computed in thread A and may not be flushed to main memory immediately, thread B may read and print 0 from main memory; it is also possible that thread A has already written the update back to main memory, in which case thread B reads 1.

So you need to ensure thread visibility:
synchronized, lock and volatile can ensure thread visibility

Volatile guarantees thread visibility case: a case study of using the Volatile keyword
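Since the linked case study is not reproduced here, the following is a rough stand-in sketch (class and field names are illustrative) of how a volatile flag makes one thread's write visible to another:

```java
public class VolatileVisibilityDemo {
    // Without volatile, the worker thread might never see the update and loop forever
    private static volatile boolean flag = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (flag) {
                // busy loop until the main thread clears the flag
            }
            System.out.println("Worker sees flag = false and exits");
        });
        worker.start();

        Thread.sleep(1000);
        flag = false; // the write is immediately visible to the worker thread
        worker.join();
    }
}
```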

10. What locks are there in Java? Explain separately

Optimistic lock/pessimistic lock

  • Pessimistic lock:

    • Assumes that whenever a thread manipulates data, other threads will modify it, so it locks every time it fetches the data, and other threads block when they try to fetch it, e.g. synchronized
  • Optimistic lock:

    • Assumes that others will not modify the data each time it is fetched; at update time it checks whether someone else has updated the data in the meantime (for example via a version number) and refuses the update if the data was modified. CAS is a form of optimistic locking, although strictly speaking it is not a lock: the consistency of the data is guaranteed by atomicity. Database optimistic locking, for example, is implemented with version control. CAS does not block for synchronization; it optimistically assumes that no other thread interferes while the data is being updated
  • Summary: pessimistic locks suit write-heavy scenarios, optimistic locks suit read-heavy scenarios, and the throughput of optimistic locking is greater than that of pessimistic locking!

Fair lock/unfair lock

  • Fair lock:

    • Multiple threads acquire the lock in the order in which they requested it. Simply put, for a group of threads, every thread is guaranteed to eventually get the lock, e.g. a fair ReentrantLock (implemented underneath with a FIFO synchronization queue: First In First Out)
  • Unfair lock:

    • The lock is acquired opportunistically, and there is no guarantee that every thread gets it; a thread may starve and never acquire the lock, e.g. synchronized and the default ReentrantLock
  • Summary: unfair locks perform better than fair locks and make better use of CPU time. With ReentrantLock you can choose fairness through the constructor, and the default is unfair (a small example follows below)! synchronized cannot be made fair; it is always an unfair lock.
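A small illustrative sketch of choosing fairness through the ReentrantLock constructor (the class name is made up); isFair() simply reports which mode was chosen:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    // true -> fair lock: threads acquire the lock in request order
    private static final ReentrantLock fairLock = new ReentrantLock(true);
    // no argument -> unfair lock (the default)
    private static final ReentrantLock unfairLock = new ReentrantLock();

    public static void main(String[] args) {
        fairLock.lock();
        try {
            System.out.println("fairLock.isFair()   = " + fairLock.isFair());   // true
            System.out.println("unfairLock.isFair() = " + unfairLock.isFair()); // false
        } finally {
            fairLock.unlock();
        }
    }
}
```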

Reentrant lock/non-reentrant lock

  • Reentrant lock:

    • Also called a recursive lock. After the outer layer acquires the lock, the inner layer can still acquire it without deadlocking. Once a thread holds the lock, it automatically succeeds when it tries to acquire the same lock again. The advantage of reentrant locks is that they avoid this kind of deadlock.
  • Non-reentrant lock:

    • If the current thread has already acquired the lock while executing a method, then when that method tries to acquire the same lock again, it cannot get it and will block.
  • Summary: reentrant locks avoid deadlock to a certain extent. synchronized and ReentrantLock are both reentrant locks (see the sketch below)!
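A minimal sketch of reentrancy with synchronized (names are illustrative): the same thread re-enters the monitor it already holds without blocking:

```java
public class ReentrancyDemo {
    public synchronized void outer() {
        System.out.println("outer holds the lock on this");
        inner(); // the same thread re-acquires the same monitor without blocking
    }

    public synchronized void inner() {
        System.out.println("inner re-entered the lock on this");
    }

    public static void main(String[] args) {
        new ReentrancyDemo().outer();
    }
}
```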

Exclusive lock/shared lock

  • An exclusive lock can be held by only one thread at a time.

    • Also called an X lock / write lock / exclusive lock: only one thread can hold it at a time. Once locked, any other thread that tries to lock it again blocks until the holder unlocks. Example: if thread A puts an exclusive lock on data1, no other thread can put any kind of lock on data1. The thread holding the exclusive lock can both read and modify the data!
  • A shared lock can be held by multiple threads at a time.

    • Also called an S lock / read lock: it allows the data to be read but not modified or deleted. Once locked, other users can still read and query the data concurrently, but cannot modify, add, or delete it. The lock can be held by multiple threads so the resource can be shared!

ReentrantLock and synchronized are both exclusive locks, the read lock of ReadWriteLock is a shared lock, and the write lock is an exclusive lock .

Mutex/read-write lock

Similar to the concept of exclusive lock/shared lock, it is the specific realization of exclusive lock/shared lock.

ReentrantLock and synchronized are mutual exclusion locks, ReadWriteLock is a read-write lock

Spin lock

  • Spin lock:

    • When a thread tries to acquire the lock and it is already held by another thread, the thread waits in a loop, repeatedly checking whether the lock can be acquired, and exits the loop only once it has acquired it. At any moment at most one execution unit holds the lock.
    • There is no thread state switch; the thread stays in user mode, which reduces the cost of context switching. The drawback is that the spinning loop consumes CPU.
  • Common spin locks: TicketLock, CLHLock, MCSLock (a simple CAS-based sketch follows below)
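For illustration only, a simple CAS-based spin lock sketch in the spirit of the description above (not one of the named algorithms, and not reentrant):

```java
import java.util.concurrent.atomic.AtomicReference;

// A minimal spin lock: threads loop on CAS until they own the lock
public class SpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // spin until the owner slot is free and we manage to claim it
        while (!owner.compareAndSet(null, current)) {
            // busy-wait: consumes CPU but avoids a kernel-level block
        }
    }

    public void unlock() {
        Thread current = Thread.currentThread();
        // only the owning thread may release the lock
        owner.compareAndSet(current, null);
    }
}
```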

Deadlock

  • Deadlock:

    • During the execution of two or more threads, a blocking situation arises because they compete for resources or wait on each other; without outside intervention the program cannot move forward!

The following three states are optimizations the JVM makes to improve the efficiency of acquiring and releasing locks: synchronized lock upgrading. The lock state is recorded in a field of the object header used by the object monitor, and the upgrade process is irreversible.

  • Biased lock:

    • A piece of synchronized code has so far been accessed by only one thread, so that thread acquires the lock automatically and the cost of acquiring it is lower!
  • Lightweight lock:

    • When the lock is biased and another thread accesses it, the biased lock is upgraded to a lightweight lock. The other thread tries to acquire it by spinning instead of blocking, which gives better performance!
  • Heavyweight lock:

    • When the lock is a lightweight lock, another spinning thread does not spin forever: after a certain number of spins without acquiring the lock, it blocks and the lock is upgraded to a heavyweight lock. A heavyweight lock blocks the other requesting threads, and performance drops!

11. Write a multi-threaded deadlock example

One thread requests lock B while it still holds lock A without releasing it; at the same time another thread holds lock B and must acquire lock A before it will release B. A closed loop forms and the program falls into deadlock:

```java
public class DeadLockDemo {
    private static String locka = "locka";
    private static String lockb = "lockb";

    public void methodA() {
        synchronized (locka) {
            System.out.println("I acquired lock A in method A " + Thread.currentThread().getName());
            // Give up CPU execution rights without releasing the lock
            try {
                Thread.sleep(2000); // sleep does not release the lock
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            synchronized (lockb) {
                System.out.println("I got lock B in method A " + Thread.currentThread().getName());
            }
        }
    }

    public void methodB() {
        synchronized (lockb) {
            System.out.println("I got lock B in method B " + Thread.currentThread().getName());
            // Give up CPU execution rights without releasing the lock
            try {
                Thread.sleep(2000); // sleep does not release the lock
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            synchronized (locka) {
                System.out.println("I got lock A in method B " + Thread.currentThread().getName());
            }
        }
    }

    public static void main(String[] args) {
        System.out.println("The main thread starts running: " + Thread.currentThread().getName());
        DeadLockDemo deadLockDemo = new DeadLockDemo();
        new Thread(() -> deadLockDemo.methodA()).start();
        new Thread(() -> deadLockDemo.methodB()).start();
        System.out.println("The main thread has finished running: " + Thread.currentThread().getName());
    }
}
```

4 necessary conditions for deadlock:

  • Mutual exclusion condition: a resource cannot be shared and can be used by only one thread at a time!
  • Hold and wait condition: a thread already holds some resources but blocks while requesting others, without releasing the resources it holds!
  • No preemption condition: some resources cannot be forcibly taken away; once a thread holds such a resource the system cannot reclaim it, and only the thread itself can release it when finished!
  • Circular wait condition: multiple threads form a circular chain, each holding a resource that the next one in the chain is requesting!

Whenever deadlock occurs, all of the above conditions hold; if even one of them is not satisfied, deadlock cannot occur!

12. Design a simple non-reentrant lock

Non-reentrant lock: if the current thread has already acquired the lock while executing a method, then when that method tries to acquire the same lock again, it cannot get it and will block!

```java
public class UnreentrantLock {
    private boolean isLocked = false;

    // lock method
    public synchronized void lock() throws InterruptedException {
        System.out.println("Enter lock to lock " + Thread.currentThread().getName());
        // If already locked, the requesting thread waits
        while (isLocked) {
            System.out.println("Enter wait and wait " + Thread.currentThread().getName());
            wait();
        }
        // Not locked yet, so lock it
        isLocked = true;
    }

    // unlock method
    public synchronized void unlock() {
        System.out.println("Enter unlock to unlock " + Thread.currentThread().getName());
        isLocked = false;
        // Wake up one thread in the object's wait set
        notify();
    }
}

public class Main {
    private UnreentrantLock unreentrantLock = new UnreentrantLock();

    // Lock inside try, unlock inside finally
    public void methodA() {
        try {
            unreentrantLock.lock();
            System.out.println("methodA method is called");
            // methodB() is called from methodA() to test whether methodB() can acquire the lock
            methodB();
        } catch (InterruptedException e) {
            e.fillInStackTrace();
        } finally {
            unreentrantLock.unlock();
        }
    }

    public void methodB() {
        try {
            unreentrantLock.lock();
            System.out.println("methodB method is called");
        } catch (InterruptedException e) {
            e.fillInStackTrace();
        } finally {
            unreentrantLock.unlock();
        }
    }

    public static void main(String[] args) {
        // Demonstrates re-entry within a single thread
        // (if a single thread cannot reenter, multithreading is moot)
        new Main().methodA();
    }
}

// The same thread fails to acquire the lock a second time and deadlocks: this is a non-reentrant lock
```

13. Design a simple reentrant lock

Reentrant lock: also called a recursive lock; after the outer layer acquires the lock, the inner layer can still acquire it without deadlocking

```java
public class ReentrantLock {
    private boolean isLocked = false;
    // Records the owning thread, used to decide whether re-entry is allowed
    private Thread lockedOwner = null;
    // Lock count: +1 for each lock, -1 for each unlock
    private int lockedCount = 0;

    // lock method
    public synchronized void lock() throws InterruptedException {
        System.out.println("Enter lock to lock " + Thread.currentThread().getName());
        // Get the current thread
        Thread thread = Thread.currentThread();
        // lockedOwner != thread compares thread references
        // If the lock is held and the current thread is not the owner, block and wait!
        while (isLocked && lockedOwner != thread) {
            System.out.println("Enter wait and wait " + Thread.currentThread().getName());
            System.out.println("Current lock status isLocked = " + isLocked);
            System.out.println("Current count lockedCount = " + lockedCount);
            wait();
        }
        // Not locked, or the current thread is already the owner:
        // take the lock, record the owner, and increment the count
        isLocked = true;
        lockedOwner = thread;
        lockedCount++;
    }

    // unlock method
    public synchronized void unlock() {
        System.out.println("Enter unlock to unlock " + Thread.currentThread().getName());
        Thread thread = Thread.currentThread();
        // A lock taken by thread A can only be released by thread A, not by some other thread B
        if (thread == this.lockedOwner) {
            lockedCount--;
            if (lockedCount == 0) {
                // fully unlocked
                isLocked = false;
                lockedOwner = null;
                // Wake up one thread in the object's wait set
                notify();
            }
        }
    }
}

public class Main {
    // private UnreentrantLock unreentrantLock = new UnreentrantLock();
    private ReentrantLock reentrantLock = new ReentrantLock();

    // Lock inside try, unlock inside finally
    public void methodA() {
        try {
            reentrantLock.lock();
            System.out.println("methodA method is called");
            methodB();
        } catch (InterruptedException e) {
            e.fillInStackTrace();
        } finally {
            reentrantLock.unlock();
        }
    }

    public void methodB() {
        try {
            reentrantLock.lock();
            System.out.println("methodB method is called");
        } catch (InterruptedException e) {
            e.fillInStackTrace();
        } finally {
            reentrantLock.unlock();
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            // Demonstrates re-entry within the same thread
            new Main().methodA();
        }
    }
}
```


14. Introduce your understanding of synchronized?

Source code analysis article reference: Synchronized analysis of java synchronization series

  • synchronized exists to solve thread-safety problems and is commonly used to synchronize instance methods, static methods, and code blocks!
  • synchronized is an unfair, reentrant lock!
  • Each object has a lock and a wait queue. The lock can be held by only one thread at a time; other threads that need it must block and wait. When the lock is released, a thread is taken from the queue and woken up; which thread gets woken is not determined, so fairness is not guaranteed

15. Explain what is CAS? And the ABA problem?

CAS full name: Compare and Swap compare and exchange

Unsafe implementation principle, refer to the article: Unsafe analysis of java magic

  • The bottom layer of CAS implements atomic operations through the Unsafe class, and the operation contains three operands:

    • Object memory address (V):
    • Expected original value (A):
    • New value (B)
  • Understanding 1: compare the value in the current working memory with the value in main memory; if it equals the expected value, perform the swap! If not, just keep looping!

  • Understanding 2: if the value at the memory address matches the expected original value, the processor automatically updates that address to the new value. If, during one round of the loop, the value thread a read has already been modified by thread b, then thread a must spin, and it gets another chance to execute in the next loop iteration.

CAS belongs to optimistic lock, and its performance is greatly improved compared with pessimistic lock !

Atomic classes such as AtomicXXX are implemented with CAS underneath, which to a certain extent performs better than synchronized, because the latter is a pessimistic lock!

When Mr. Xiaodi explained this, newcomers meeting CAS for the first time found it hard to follow. Here I borrow Mr. Crazy God's way of introducing CAS: let's start with a case:

Case:

```java
public class CASDemo {
    // CAS compareAndSet: compare and swap!
    public static void main(String[] args) {
        AtomicInteger atomicInteger = new AtomicInteger(2020);

        // expect, update
        // public final boolean compareAndSet(int expect, int update)
        // If the current value equals the expected value, update it; otherwise do not update.
        // CAS is a concurrency primitive of the CPU!
        System.out.println(atomicInteger.compareAndSet(2020, 2021)); // true
        System.out.println(atomicInteger.get()); // 2021

        // atomicInteger.getAndIncrement() // see how ++ is implemented underneath

        System.out.println(atomicInteger.compareAndSet(2020, 2021)); // false
        System.out.println(atomicInteger.get()); // 2021
    }
}
```

Let's take a look at the underlying implementation of the getAndIncrement() method:

```java
public class AtomicInteger extends Number implements java.io.Serializable {
    private static final long serialVersionUID = 6214790243416807050L;

    // The Unsafe class calls into C++ underneath: Java cannot manipulate memory directly,
    // so memory is manipulated through C++ here
    private static final Unsafe unsafe = Unsafe.getUnsafe();
    private static final long valueOffset;

    static {
        try {
            // Obtain the memory offset valueOffset of the value field
            valueOffset = unsafe.objectFieldOffset(AtomicInteger.class.getDeclaredField("value"));
        } catch (Exception ex) {
            throw new Error(ex);
        }
    }

    // value is modified by volatile to forbid instruction reordering and guarantee visibility and ordering
    private volatile int value;

    ...

    public final int getAndIncrement() {
        // Parameters:
        // this: the current object
        // valueOffset: the memory offset of the value field in the current object
        // 1: the delta
        return unsafe.getAndAddInt(this, valueOffset, 1);
    }

    ...
}
```

After getting a general understanding of UnSafe, we continue to click into the getAndIncrement() method and view the getAndAddInt() method called by unsafe:

```java
// Located in the Unsafe class
// Parameters: var1 the current object, var2 the memory offset in the current object, var4 the delta (1)
public final int getAndAddInt(Object var1, long var2, int var4) {
    int var5;
    // A spin is used here: if the lock-free update fails because another thread changed the value,
    // the loop keeps retrying until the compare-and-swap succeeds; at most one execution unit wins each round
    do {
        // Read the current value at the memory address
        var5 = this.getIntVolatile(var1, var2);
        // Use the CAS compare-and-swap to implement the +1 of getAndIncrement()
    } while (!this.compareAndSwapInt(var1, var2, var5, var5 + var4));
    return var5;
}

...

// Calls into C++ to perform the compare and swap
public final native boolean compareAndSwapInt(Object var1, long var2, int var4, int var5);
```

CAS: compare the value in the current working memory with the value in main memory; if it equals the expected value, perform the operation! If not, just keep looping!

CAS's ABA problem?

"Swapping a civet cat for the crown prince": the value is swapped away and then swapped back behind your back, so the change goes unnoticed.

```java
public class CasAbaTest {
    // CAS compareAndSet: compare and swap!
    public static void main(String[] args) {
        AtomicInteger atomicInteger = new AtomicInteger(2020);

        /*
         * Similar to the optimistic locking we usually write in SQL:
         *
         * if one thread is operating on an object and other threads operate on it too,
         * then even if the content ends up unchanged, I want to be told about it.
         *
         * expect, update:
         * public final boolean compareAndSet(int expect, int update)
         * If the current value equals the expected value, update it; otherwise do not update.
         * CAS is a concurrency primitive of the CPU!
         */

        // ============== The meddling thread =================
        System.out.println(atomicInteger.compareAndSet(2020, 2021));
        System.out.println(atomicInteger.get());
        System.out.println(atomicInteger.compareAndSet(2021, 2020));
        System.out.println(atomicInteger.get());

        // ============== The expected thread =================
        System.out.println(atomicInteger.compareAndSet(2020, 6666));
        System.out.println(atomicInteger.get());
    }
}

// Output:
// true
// 2021
// true
// 2020
// true
// 6666
```

In the case above, suppose the thread we expect wants to change the value from 2020 to 6666, but a meddling thread cuts in and runs before it: it first replaces 2020 with 2021, and then replaces 2021 back with 2020!

So when the expected thread finally runs, the value still appears to be the original 2020, unchanged; in reality the meddling thread has already performed two replacements, and our expected thread is completely unaware of it! This is the ABA problem!

How to solve the ABA problem?

In essence, the fix is an optimistic-locking strategy: attach a version number (stamp) to the value, which is what AtomicStampedReference does below!

```java
public class CASDemo {
    /**
     * AtomicStampedReference note:
     * if the generic type is a wrapper class, beware of object reference issues.
     * In real business code the things being compared are usually objects.
     */
    // Parameter 1: initial value 100
    // Parameter 2: initial version number initialStamp = 1
    static AtomicStampedReference<Integer> atomicStampedReference = new AtomicStampedReference<>(100, 1);

    // CAS compareAndSet: compare and swap!
    public static void main(String[] args) {
        // Thread A:
        new Thread(() -> {
            // When the thread starts, first obtain the current version number
            int stamp = atomicStampedReference.getStamp();
            System.out.println("Thread A, version number obtained the first time: " + stamp);
            try {
                TimeUnit.SECONDS.sleep(2);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            // CAS compare and swap: 100 ---> 101
            atomicStampedReference.compareAndSet(100, 101,
                    atomicStampedReference.getStamp(),
                    atomicStampedReference.getStamp() + 1);
            System.out.println("Thread A, version number the second time: " + atomicStampedReference.getStamp());

            // CAS: 101 ---> 100
            System.out.println("Thread A, second CAS result: " + atomicStampedReference.compareAndSet(101, 100,
                    atomicStampedReference.getStamp(),
                    atomicStampedReference.getStamp() + 1));
            System.out.println("Thread A, version number the third time: " + atomicStampedReference.getStamp());
        }, "A").start();

        // Thread B:
        new Thread(() -> {
            int stamp = atomicStampedReference.getStamp();
            System.out.println("Thread B, version number obtained the first time: " + stamp);
            try {
                TimeUnit.SECONDS.sleep(2);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            // CAS: 100 ---> 99, using the stale stamp obtained earlier
            System.out.println("Thread B, first CAS result: " + atomicStampedReference.compareAndSet(100, 99,
                    stamp, stamp + 1));
            System.out.println("Thread B, version number the second time: " + atomicStampedReference.getStamp());
        }, "B").start();
    }
}
```

Every successful CAS also advances the version number by +1 from the initial initialStamp, so a stale stamp makes the CAS fail.

Output (one sample run):

Thread A, version number obtained the first time: 1
Thread B, version number obtained the first time: 1
Thread A, version number the second time: 2
Thread A, second CAS result: true
Thread A, version number the third time: 3
Thread B, first CAS result: false
Thread B, version number the second time: 3

This is the same idea as MySQL optimistic locking with a version field.

Note: Integer caches the values -128 to 127; inside that range valueOf returns the cached object, while outside it a new Integer object is created. Since AtomicStampedReference compares the wrapper objects by reference, values outside -128 to 127 can make compareAndSet return false even when the two numbers look equal, so be careful with wrapper types~

16. Do you understand AQS?

What is AQS?

AQS, short for AbstractQueuedSynchronizer (abstract queued synchronizer), is the basic framework under java.util.concurrent.locks for building locks and synchronizers in Java, besides the synchronized keyword.

Many common Java concurrency classes, such as CountDownLatch, ReentrantLock, Semaphore, ReentrantReadWriteLock, SynchronousQueue, and FutureTask, are built on top of AQS.

AQS maintains a shared state variable plus a FIFO queue of waiting threads; concrete synchronizers are built by extending AQS, so once you understand AQS these classes become much easier to understand (a minimal sketch follows below)!
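As a rough illustration of that idea (the class name and design are assumptions, not the implementation of any JDK class), a synchronizer can be built by extending AQS and overriding its tryAcquire/tryRelease template methods:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A minimal non-reentrant mutex built on AQS: state 0 = unlocked, 1 = locked
public class SimpleMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            // CAS the state from 0 to 1; only one thread can succeed
            return compareAndSetState(0, 1);
        }

        @Override
        protected boolean tryRelease(int arg) {
            setState(0);
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }  // blocks in AQS's FIFO queue if already locked
    public void unlock() { sync.release(1); }  // wakes up the next queued thread
}
```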

17. What is the difference between ReentrantLock and synchronized?

  • ReentrantLock and synchronized are both reentrant, exclusive locks

  • synchronized:

    • 1. A Java keyword, implemented at the JVM level
    • 2. The lock is acquired and released automatically
    • 3. Cannot respond to interruption while waiting for the lock, and cannot tell whether the lock was acquired
    • 4. An unfair lock
  • ReentrantLock:

    • 1. An API-level implementation of the Lock interface
    • 2. Can be fair or unfair (chosen through the constructor)
    • 3. Must be locked and unlocked manually, with unlock() placed in a finally block
    • 4. Can respond to interruption while waiting for the lock
    • 5. tryLock() returns true or false, so the caller knows whether the lock was acquired (see the sketch below)
    • 6. Implemented on top of AQS, using a state variable and a FIFO waiting queue
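A small sketch of point 5 above (the class name is illustrative): tryLock() reports whether the lock was obtained instead of blocking forever:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        // tryLock returns true/false, here waiting at most 500 ms
        if (lock.tryLock(500, TimeUnit.MILLISECONDS)) {
            try {
                System.out.println("Lock acquired, doing work");
            } finally {
                lock.unlock(); // always release in finally
            }
        } else {
            System.out.println("Could not acquire the lock, doing something else");
        }
    }
}
```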

18. What is the difference between ReentrantReadWriteLock and ReentrantLock?

ReentrantReadWriteLock:

1. Implements the ReadWriteLock interface and maintains a pair of locks, a read lock and a write lock

2. Also implemented on top of AQS

3. The read lock is shared and the write lock is exclusive: read-read does not block, while read-write and write-write do

4. Suited to scenarios with many reads and few writes, which improves throughput

ReentrantLock and synchronized are exclusive: if thread A holds the lock, threads B and C must wait even if they only want to read. With a ReadWriteLock, when A holds the read lock, B and C can read concurrently and only writers are excluded; this is why a read-write lock outperforms ReentrantLock in read-heavy scenarios (a sketch follows below)!
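A hedged sketch of that read-many-write-few scenario (the class name and cache design are illustrative): readers share the read lock while a writer takes the write lock exclusively:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// A simple cache protected by a read-write lock: many readers may proceed at once,
// while a writer gets exclusive access
public class ReadWriteCache {
    private final Map<String, Object> map = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    public Object get(String key) {
        rwLock.readLock().lock();      // shared lock: read-read does not block
        try {
            return map.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void put(String key, Object value) {
        rwLock.writeLock().lock();     // exclusive lock: blocks all readers and writers
        try {
            map.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}
```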

19. Do you know about blocking queues (BlockingQueue)?

What is a BlockingQueue?

  • A queue that blocks, for example ArrayBlockingQueue
  • put: when the queue is full, the thread inserting an element blocks until space becomes available
  • take: when the queue is empty, the thread taking an element blocks until an element becomes available

BlockingQueue is an interface in juc (java.util.concurrent). Its two key behaviors are:

  • 1. When the queue is full, the producer blocks on enqueue
  • 2. When the queue is empty, the consumer blocks on dequeue

Common implementations:

  • ArrayBlockingQueue

    • A bounded blocking queue backed by an array, FIFO order;
  • LinkedBlockingQueue

    • A blocking queue backed by a linked list, with a default capacity of Integer.MAX_VALUE, FIFO order;
  • PriorityBlockingQueue

    • An unbounded blocking queue with priority support; elements must implement java.lang.Comparable;
  • DelayQueue

    • An unbounded blocking queue whose elements implement java.util.concurrent.Delayed (CompareTo and getDelay); an element can only be taken once its delay has expired;

ConcurrentLinkedQueue

By contrast, ConcurrentLinkedQueue is Java's non-blocking concurrent queue, implemented with CAS rather than locks.
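A minimal producer/consumer sketch with ArrayBlockingQueue (the names and capacity are arbitrary): put blocks when the queue is full and take blocks when it is empty:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingQueueDemo {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2); // bounded: capacity 2

        // Producer: put blocks once the queue is full
        new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    queue.put(i);
                    System.out.println("produced " + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();

        // Consumer: take blocks while the queue is empty
        new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    System.out.println("consumed " + queue.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();
    }
}
```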

20. What are the ways to create thread pools in Java?

  • newFixedThreadPool

    • Creates a pool with a fixed number of threads
  • newCachedThreadPool

    • Creates a cached pool that spawns threads as needed and reuses idle ones
  • newSingleThreadExecutor

    • Creates a single-threaded executor that runs tasks one after another
  • newScheduledThreadPool

    • Creates a pool that supports scheduled / periodic task execution

Why is it recommended to build pools with ThreadPoolExecutor instead of the Executors factory methods?

Creating pools through Executors hides the real ThreadPoolExecutor parameters, so it is recommended to construct a ThreadPoolExecutor directly. newFixedThreadPool and newSingleThreadExecutor use a LinkedBlockingQueue with a capacity of Integer.MAX_VALUE, so queued requests can pile up and cause OOM. newScheduledThreadPool and newCachedThreadPool allow up to Integer.MAX_VALUE threads, so creating a huge number of threads can also cause OOM.

The parameters of the ThreadPoolExecutor constructor

```java
public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler)
```
  • corePoolSize
    The number of core threads in the pool; by default core threads are kept alive even if they stay idle longer than keepAliveTime, and the pool keeps at most corePoolSize of them

  • maximumPoolSize
    The maximum number of threads the pool allows

  • keepAliveTime
    How long threads beyond corePoolSize may stay idle before they are terminated

  • unit
    The time unit of keepAliveTime, e.g. TimeUnit.SECONDS or TimeUnit.MILLISECONDS

  • workQueue
    The queue that holds tasks waiting to be executed, e.g. ArrayBlockingQueue, LinkedBlockingQueue, SynchronousQueue

  • threadFactory
    The factory used to create new threads

  • handler
    The RejectedExecutionHandler invoked when the queue is full and the pool has reached maximumPoolSize; there are 4 built-in policies:

    • AbortPolicy: rejects the task and throws RejectedExecutionException (the default)
    • CallerRunsPolicy: runs the rejected task in the caller's own thread
    • DiscardOldestPolicy: discards the oldest task in the queue and retries the submission
    • DiscardPolicy: silently discards the rejected task
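A hedged example of constructing a ThreadPoolExecutor directly with the parameters above (the concrete values are arbitrary choices for illustration, not recommendations):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThreadPoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                    // corePoolSize
                4,                                    // maximumPoolSize
                60L, TimeUnit.SECONDS,                // keepAliveTime + unit for non-core threads
                new ArrayBlockingQueue<>(10),         // bounded work queue avoids unbounded task buildup
                Executors.defaultThreadFactory(),     // thread factory
                new ThreadPoolExecutor.AbortPolicy()  // rejection policy: throw when saturated
        );

        for (int i = 0; i < 8; i++) {
            final int taskId = i;
            pool.execute(() ->
                    System.out.println("task " + taskId + " runs on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}
```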