# Android image processing 2: PinchImageView source code analysis

PinchImageView uses `GestureDetector` to handle long-press, click, double-click, and fling events, and `onTouchEvent` to handle two-finger zoom and single-finger dragging. It keeps two matrices: the outer transformation matrix (`mOuterMatrix`), which records the result of gesture operations, and the inner transformation matrix (filled in by `getInnerMatrix(Matrix)`), which is the initial matrix produced by scaling and translating the image in `fitCenter` mode. Separating the two (a design possibly borrowed from PhotoView) keeps the gesture transform and the initial fit independent of each other; the final transform is simply the product of the two matrices. The analysis below does not reproduce the full source, and some snippets have been lightly modified.

## 1 Double-click and inertial sliding

Long press and click simply invoke listener callbacks, so we will focus on double-click and inertial sliding (fling).

### 1.1 Double-click

PinchImageView implements only one zoom level, meaning a double-tap toggles between the maximum and the initial scale. The basic idea: capture the double-tap event, take the x/y coordinates of the tapped point, scale the image up, and move the tapped point toward the center of the view. The code is long, so we will split it up. First, a look at PinchImageView's object pool (`ObjectsPool`). `ObjectsPool` maintains a queue of objects and recycles them up to a capacity limit. The typical usage flow:

1. Take an `innerMatrix` object from the pool (`take()`). If the queue is empty, create and return a new object; otherwise poll an object from the queue, reset it, and return it.
2. Take a `targetMatrix` object from the pool.
3. When finished with `targetMatrix`, return it (`given(obj)`).
4. When finished with `innerMatrix`, return it.

The order of return does not matter.

```java
/**
 * Object pool
 *
 * Prevents memory churn caused by frequently allocating new objects.
 * Since the pool has a maximum size, churn can still occur if throughput
 * exceeds the pool's capacity; in that case the capacity must be increased,
 * at the cost of more memory.
 *
 * @param <T> the type of object held by the pool
 */
private static abstract class ObjectsPool<T> {
    /**
     * Maximum capacity of the pool
     */
    private int mSize;
    /**
     * Queue backing the pool
     */
    private Queue<T> mQueue;

    public ObjectsPool(int size) {
        mSize = size;
        mQueue = new LinkedList<T>();
    }

    public T take() {
        // If the pool is empty, create a new instance
        if (mQueue.size() == 0) {
            return newInstance();
        } else {
            // Otherwise take one from the head of the queue, reset it, and return it
            return resetInstance(mQueue.poll());
        }
    }

    public void given(T obj) {
        // Return the object if there is still room in the pool
        if (obj != null && mQueue.size() < mSize) {
            mQueue.offer(obj);
        }
    }

    abstract protected T newInstance();

    abstract protected T resetInstance(T obj);
}
```
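To make the take/given flow concrete, here is a minimal, self-contained sketch of the same pool pattern in plain Java. The `MatrixPool` subclass here is illustrative only, using a `float[9]` in place of Android's `Matrix`:

```java
import java.util.Arrays;
import java.util.LinkedList;
import java.util.Queue;

public class PoolDemo {
    static abstract class ObjectsPool<T> {
        private final int mSize;
        private final Queue<T> mQueue = new LinkedList<>();

        ObjectsPool(int size) { mSize = size; }

        public T take() {
            // Empty pool: allocate; otherwise poll and reset a recycled instance
            return mQueue.isEmpty() ? newInstance() : resetInstance(mQueue.poll());
        }

        public void given(T obj) {
            // Only keep the object if the pool still has room
            if (obj != null && mQueue.size() < mSize) {
                mQueue.offer(obj);
            }
        }

        abstract protected T newInstance();
        abstract protected T resetInstance(T obj);
    }

    // Stand-in for the real MatrixPool: a 3x3 matrix stored as float[9]
    static class MatrixPool extends ObjectsPool<float[]> {
        MatrixPool(int size) { super(size); }
        protected float[] newInstance() { return new float[9]; }
        protected float[] resetInstance(float[] m) {
            Arrays.fill(m, 0f);
            m[0] = m[4] = m[8] = 1f; // reset to the identity matrix
            return m;
        }
    }

    public static void main(String[] args) {
        MatrixPool pool = new MatrixPool(4);
        float[] a = pool.take();   // freshly allocated
        a[2] = 100f;               // use it as a translation
        pool.given(a);             // return it to the pool
        float[] b = pool.take();   // recycled and reset to identity
        System.out.println(a == b);      // true: the same instance is reused
        System.out.println(b[2] == 0f);  // true: the reset cleared the translation
    }
}
```

As the demo shows, as long as callers return objects, repeated take/given cycles allocate nothing new, which is exactly the memory-churn point the comment makes.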

Now for the handling of the double-tap event:

```java
private void doubleTap(float x, float y) {
    // Obtain the inner transformation matrix
    Matrix innerMatrix = MathUtils.matrixTake();
    getInnerMatrix(innerMatrix);

    ...
    MathUtils.matrixGiven(innerMatrix);
}
```

The first step is to obtain the inner transformation matrix. `MathUtils.matrixTake()` takes a `Matrix` object from a matrix object pool (MatrixPool):

```java
public static Matrix matrixTake() {
    return mMatrixPool.take();
}

/**
 * Get a copy of a matrix
 */
public static Matrix matrixTake(Matrix matrix) {
    Matrix result = mMatrixPool.take();
    if (matrix != null) {
        result.set(matrix);
    }
    return result;
}
```

Then the inner transformation matrix is computed and stored in `innerMatrix`:

```java
public Matrix getInnerMatrix(Matrix matrix) {
    ...

    // Original image size
    RectF tempSrc = MathUtils.rectFTake(0, 0, getDrawable().getIntrinsicWidth(), getDrawable().getIntrinsicHeight());
    // View size
    RectF tempDst = MathUtils.rectFTake(0, 0, getWidth(), getHeight());
    // Compute the fit-center matrix
    matrix.setRectToRect(tempSrc, tempDst, Matrix.ScaleToFit.CENTER);

    ...

    return matrix;
}
```

`MathUtils.rectFTake` works the same way as `matrixTake`, except that it returns a `RectF`. The key call is `matrix.setRectToRect`, which was introduced earlier. Continuing:

```java
// Current total scale
float innerScale = MathUtils.getMatrixScale(innerMatrix)[0];
float outerScale = MathUtils.getMatrixScale(mOuterMatrix)[0];
float currentScale = innerScale * outerScale;
```

Here the inner scale and the outer scale are multiplied to get the total scale; keeping the two independent really is a nice design. Next, the target scale is computed:

```java
float nextScale = currentScale < MAX_SCALE ? MAX_SCALE : innerScale;
// If the next scale exceeds the maximum or falls below the fit-center scale, clamp it
if (nextScale > maxScale) {
    nextScale = maxScale;
}
if (nextScale < innerScale) {
    nextScale = innerScale;
}
// Build the end matrix of the zoom animation
Matrix animEnd = MathUtils.matrixTake(mOuterMatrix);
// Apply the scale factor needed to reach nextScale
animEnd.postScale(nextScale / currentScale, nextScale / currentScale, x, y);
// Move the tapped point to the center of the view
animEnd.postTranslate(displayWidth / 2f - x, displayHeight / 2f - y);
...
// Start the matrix animation
mScaleAnimator = new ScaleAnimator(mOuterMatrix, animEnd);
mScaleAnimator.start();
```

This code is a little painful, so let's first lay out the idea: a double-tap zoom should be animated, so the animation starts from the current transform and ends at the target scale. The factor applied is `nextScale` / `currentScale`, and since gestures are recorded in the outer matrix `mOuterMatrix`, the animation's start matrix is naturally copied from `mOuterMatrix`. This code is actually problematic, though. `innerScale` is the scale of the original image after the `fitCenter` transform. Suppose the image is very large, so that after the transform `innerScale` is 0.2f; `maxScale` is 2, and no gesture has been performed yet, so `outerScale` is 1. Now look at the result of the calculation:

$currentScale = innerScale \times outerScale = 0.2 \times 1 = 0.2$
$nextScale = (0.2 < 2)\ ?\ 2 : 0.2 = 2$
$\frac{nextScale}{currentScale} = \frac{2}{0.2} = 10$

That is, a single double-tap magnifies the image tenfold at once... and bear in mind that many images today are larger than a phone screen in both width and height...
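The jump can be reproduced with plain arithmetic. This hypothetical helper (the name and clamping layout are mine, not from the source) mirrors the `nextScale` selection in `doubleTap`:

```java
public class DoubleTapScaleDemo {
    static final float MAX_SCALE = 2f;

    // Mirrors the nextScale selection in doubleTap(): toggle between
    // MAX_SCALE and the fit-center scale (innerScale), then clamp
    static float nextScale(float innerScale, float outerScale) {
        float currentScale = innerScale * outerScale;
        float next = currentScale < MAX_SCALE ? MAX_SCALE : innerScale;
        if (next > MAX_SCALE) next = MAX_SCALE;
        if (next < innerScale) next = innerScale;
        return next;
    }

    public static void main(String[] args) {
        float innerScale = 0.2f; // a large image scaled down by fitCenter
        float outerScale = 1f;   // no gesture applied yet
        float next = nextScale(innerScale, outerScale);
        // One double-tap multiplies the on-screen size by this factor
        float factor = next / (innerScale * outerScale);
        System.out.println(factor); // 10.0
    }
}
```

A friendlier design would cap the per-tap factor or scale relative to the fit-center size, which is what the article is criticizing.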

`ScaleAnimator` does only one thing: it keeps updating `mOuterMatrix` and calling `invalidate`, so that `onDraw` refreshes the view.

```java
@Override
public void onAnimationUpdate(ValueAnimator animation) {
    // Get the animation progress
    float value = (Float) animation.getAnimatedValue();
    // Interpolate the intermediate matrix for this progress
    for (int i = 0; i < 9; i++) {
        mResult[i] = mStart[i] + (mEnd[i] - mStart[i]) * value;
    }
    // Apply the matrix and redraw
    mOuterMatrix.setValues(mResult);
    ...
    invalidate();
}

@Override
protected void onDraw(Canvas canvas) {
    ...
    // Set the transformation matrix before drawing
    setImageMatrix(getCurrentImageMatrix(matrix));
    ...
    super.onDraw(canvas);
    ...
}
```
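The per-frame update is component-wise linear interpolation over the nine matrix values; a standalone sketch using plain arrays instead of Android's `Matrix`:

```java
public class MatrixLerpDemo {
    // Linearly interpolate all 9 values of a 3x3 matrix for progress t in [0, 1]
    static float[] lerp(float[] start, float[] end, float t) {
        float[] result = new float[9];
        for (int i = 0; i < 9; i++) {
            result[i] = start[i] + (end[i] - start[i]) * t;
        }
        return result;
    }

    public static void main(String[] args) {
        // Identity -> scale 2 with translation (100, 50), row-major 3x3
        float[] start = {1, 0, 0,   0, 1, 0,   0, 0, 1};
        float[] end   = {2, 0, 100, 0, 2, 50,  0, 0, 1};
        float[] mid = lerp(start, end, 0.5f);
        System.out.println(mid[0]); // 1.5 : scale halfway
        System.out.println(mid[2]); // 50.0 : x-translation halfway
    }
}
```

Interpolating matrix values component-wise is only an approximation of interpolating the transform itself, but for scale-plus-translate matrices like these it behaves well, and it is exactly what `ScaleAnimator` does.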

After zooming and panning, the edge of the image may move inside the view, so the position needs correcting by comparing the final transformed image bounds against the view bounds.

```java
Matrix testMatrix = MathUtils.matrixTake(innerMatrix);
testMatrix.postConcat(animEnd);
RectF testBound = MathUtils.rectFTake(0, 0, getDrawable().getIntrinsicWidth(), getDrawable().getIntrinsicHeight());
testMatrix.mapRect(testBound);
```

`animEnd` records the outer matrix with the double-tap transform applied; concatenating it with the inner matrix (`innerMatrix`) gives the final transformation matrix (`testMatrix`), which maps the image rectangle to its final bounds (`testBound`).

```java
// Correct the position
float postX = 0;
float postY = 0;
if (testBound.right - testBound.left < displayWidth) {
    postX = displayWidth / 2f - (testBound.right + testBound.left) / 2f;
} else if (testBound.left > 0) {
    postX = -testBound.left;
} else if (testBound.right < displayWidth) {
    postX = displayWidth - testBound.right;
}
...
// Apply the correction
animEnd.postTranslate(postX, postY);
```

The position correction here is easy to follow, so I won't dwell on it, except to point out two errors in the source: in `postX = displayWidth / 2f - (testBound.right + testBound.left) / 2f;`, the `testBound.right + testBound.left` should be `testBound.right - testBound.left`. The `postY` line (not shown) needs the same change.
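The clamping branches can be exercised in isolation. This sketch implements the formula as it appears in the source (the `right + left` centering term), with hypothetical bounds; note that for a bound starting at `left = 0` the two centering variants discussed above happen to coincide:

```java
public class PositionFixDemo {
    // Horizontal correction from the double-tap handler: center a narrow image,
    // otherwise clamp its edges to the view edges
    static float postX(float left, float right, float displayWidth) {
        float postX = 0;
        if (right - left < displayWidth) {
            postX = displayWidth / 2f - (right + left) / 2f;
        } else if (left > 0) {
            postX = -left;
        } else if (right < displayWidth) {
            postX = displayWidth - right;
        }
        return postX;
    }

    public static void main(String[] args) {
        float displayWidth = 1080f;
        // Narrow image starting at the left edge: centered
        System.out.println(postX(0f, 300f, displayWidth));    // 390.0
        // Wide image with a gap on the left: snapped back to the left edge
        System.out.println(postX(200f, 1500f, displayWidth)); // -200.0
        // Wide image with a gap on the right: snapped to the right edge
        System.out.println(postX(-800f, 1000f, displayWidth)); // 80.0
    }
}
```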

### 1.2 Inertial sliding (Fling)

PinchImageView implements fling with its own decay... the decay factor is the same on every frame and no interpolator is supported, which looks a bit crude compared with PhotoView's use of `OverScroller`. `GestureDetector`'s `onFling(MotionEvent e1, MotionEvent e2, float velocityX, float velocityY)` supplies the x and y velocities in pixels per second; at 60 frames per second that converts to `velocityX / 60` and `velocityY / 60` pixels per frame. PinchImageView animates the fling with `FlingAnimator`: each update moves the image by the current per-frame distance (initially `velocityX / 60`), then multiplies it by the decay factor (`FLING_DAMPING_FACTOR`, 0.9) for the next update.

```java
// Move the image and record whether it actually moved
boolean result = scrollBy(mVector[0], mVector[1], null);
mVector[0] *= FLING_DAMPING_FACTOR;
mVector[1] *= FLING_DAMPING_FACTOR;
// Stop when the speed is too low or the image can no longer move
if (!result || MathUtils.getDistance(0, 0, mVector[0], mVector[1]) < 1f) {
    animation.cancel();
}
```
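The decay loop can be simulated to see how far a fling travels. This is a hypothetical sketch of the `FlingAnimator` update (the constant is from the source; the helper name and the single-axis simplification are mine):

```java
public class FlingDecayDemo {
    static final float FLING_DAMPING_FACTOR = 0.9f;

    // Total distance covered by a fling starting at `velocityPerSecond` px/s
    // at 60 fps, stopping once the per-frame step drops below 1 px
    static float totalFlingDistance(float velocityPerSecond) {
        float step = velocityPerSecond / 60f; // px per frame
        float total = 0f;
        while (Math.abs(step) >= 1f) {
            total += step;
            step *= FLING_DAMPING_FACTOR; // same 10% decay every frame
        }
        return total;
    }

    public static void main(String[] args) {
        // A 3000 px/s fling starts at 50 px/frame; the geometric series bounds
        // the total travel below 50 / (1 - 0.9) = 500 px
        float total = totalFlingDistance(3000f);
        System.out.println(total < 500f); // true
        System.out.println(total > 400f); // true
    }
}
```

The fixed 0.9 factor is why the motion feels the same regardless of fling speed: only the travel distance changes, never the decay curve.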

`scrollBy(float xDiff, float yDiff, MotionEvent motionEvent)` handles the scrolling; the main concern is clamping against the image and view borders. The principle is the same as the position correction during zooming above, and the image bounds are obtained the same way:

```java
// Get the inner transformation matrix
matrix = getInnerMatrix(matrix);
// Concatenate the outer transformation matrix
matrix.postConcat(mOuterMatrix);
rectF.set(0, 0, getDrawable().getIntrinsicWidth(), getDrawable().getIntrinsicHeight());
matrix.mapRect(rectF);
```

Finally, `mOuterMatrix` is translated (`postTranslate`) and `invalidate` triggers `onDraw`, which sets the new matrix on the image.

## 2 Two-finger zoom, one-finger movement

Two-finger zoom and single-finger movement are both handled in `onTouchEvent`.

### 2.1 Two-finger zoom

Principle: record the distance between the two fingers when they touch down. The scale per unit distance is the outer matrix's scale divided by that initial distance. Multiplying this value by the finger distance after sliding yields the new scale, which is applied to the outer matrix to produce the final outer matrix.

$mScaleBase = \frac{mOuterMatrix.scale}{initialDistance}$
$nextScale = mScaleBase \times newDistance$
i.e. $y(nextScale) = k(mScaleBase)\,x(newDistance)$
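These relations can be checked numerically; a sketch with hypothetical finger distances and scales:

```java
public class PinchScaleDemo {
    // mScaleBase: outer scale at finger-down divided by the initial finger distance
    static float scaleBase(float outerScale, float initialDistance) {
        return outerScale / initialDistance;
    }

    // nextScale: mScaleBase times the current finger distance
    static float nextScale(float scaleBase, float distance) {
        return scaleBase * distance;
    }

    public static void main(String[] args) {
        // Outer scale 1, fingers 200 px apart at touch-down
        float k = scaleBase(1f, 200f);
        // Spreading the fingers to 400 px doubles the outer scale
        System.out.println(nextScale(k, 400f)); // 2.0
        // Starting with the fingers closer together (100 px) makes the same
        // spread zoom twice as fast
        System.out.println(nextScale(scaleBase(1f, 100f), 400f)); // 4.0
    }
}
```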

Clearly, `mScaleBase`, the scale per unit distance, is the slope, and it determines how fast the two-finger zoom responds. The determining factors are the current outer matrix scale and the initial distance between the two fingers: the larger the outer scale and the smaller the initial distance, the faster the pinch zooms. Another thing to note is the zoom center of the image: in PinchImageView the two-finger zoom transform is carried out in the identity (untransformed) coordinate space, so when the two fingers go down, the center point must be recorded as it was before the outer matrix transform. The source uses the `mScaleCenter` member to record this point (PS: you may want to ignore all the comments around this variable in the source, or you will get dizzy). A quick look at the relevant code:

```java
private PointF mScaleCenter = new PointF();
private float mScaleBase = 0;
...
public boolean onTouchEvent(MotionEvent event) {
    ...
    int action = event.getAction() & MotionEvent.ACTION_MASK;
    if (action == MotionEvent.ACTION_POINTER_DOWN) {
        // Switch to zoom mode
        mPinchMode = PINCH_MODE_SCALE;
        // Save the context of the two pinching fingers
        saveScaleContext(event.getX(0), event.getY(0), event.getX(1), event.getY(1));
    } else if (action == MotionEvent.ACTION_MOVE) {
        ...
        // Distance between the two pinch points
        float distance = MathUtils.getDistance(event.getX(0), event.getY(0), event.getX(1), event.getY(1));
        // Save the midpoint of the two pinch points
        float[] lineCenter = MathUtils.getCenterPoint(event.getX(0), event.getY(0), event.getX(1), event.getY(1));
        mLastMovePoint.set(lineCenter[0], lineCenter[1]);
        // Handle the zoom
        scale(mScaleCenter, mScaleBase, distance, mLastMovePoint);
        ...
    }
}
```

When a second finger goes down, pinch mode is recorded and `saveScaleContext()` stores the `mScaleBase` and `mScaleCenter` mentioned above; the zoom logic itself runs under `MotionEvent.ACTION_MOVE`. Now look at what `saveScaleContext` does:

```java
private void saveScaleContext(float x1, float y1, float x2, float y2) {
    mScaleBase = MathUtils.getMatrixScale(mOuterMatrix)[0] / MathUtils.getDistance(x1, y1, x2, y2);
    float[] center = MathUtils.inverseMatrixPoint(MathUtils.getCenterPoint(x1, y1, x2, y2), mOuterMatrix);
    mScaleCenter.set(center[0], center[1]);
}
```

`mScaleBase` was already covered above; the part worth highlighting is `inverseMatrixPoint`. Its definition:

```java
public static float[] inverseMatrixPoint(float[] point, Matrix matrix) {
    if (point != null && matrix != null) {
        float[] dst = new float[2];
        // Compute the inverse of matrix
        Matrix inverse = matrixTake();
        matrix.invert(inverse);
        // Map point through the inverse into dst; dst is the result
        inverse.mapPoints(dst, point);
        // Release the temporary matrix
        matrixGiven(inverse);
        return dst;
    } else {
        return new float[2];
    }
}
```
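For a matrix that only scales and translates, the inverse mapping that `inverseMatrixPoint` performs reduces to simple per-axis arithmetic. A sketch of that scalar form (no Android `Matrix`; the helper names are mine):

```java
public class InversePointDemo {
    // Forward: p' = p * scale + translate (per axis, for a scale+translate matrix)
    static float[] mapPoint(float[] p, float scale, float tx, float ty) {
        return new float[]{p[0] * scale + tx, p[1] * scale + ty};
    }

    // Inverse: p = (p' - translate) / scale -- what inverseMatrixPoint computes
    static float[] inverseMapPoint(float[] p, float scale, float tx, float ty) {
        return new float[]{(p[0] - tx) / scale, (p[1] - ty) / scale};
    }

    public static void main(String[] args) {
        // Suppose the outer matrix scales by 2 and translates by (100, 50)
        float scale = 2f, tx = 100f, ty = 50f;
        float[] pinchMidpoint = {300f, 250f}; // midpoint of the fingers on screen
        // Where that point was before the outer transform (the mScaleCenter idea)
        float[] before = inverseMapPoint(pinchMidpoint, scale, tx, ty);
        System.out.println(before[0]); // 100.0
        System.out.println(before[1]); // 100.0
        // Round-trip sanity check
        float[] again = mapPoint(before, scale, tx, ty);
        System.out.println(again[0] == 300f && again[1] == 250f); // true
    }
}
```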

`srcMatrix.invert(targetMatrix)` stores the inverse of `srcMatrix` in `targetMatrix`, and `matrix.mapPoints(targetPoint, srcPoint)` applies the matrix transform to `srcPoint` and writes the result into `targetPoint`. So this method recovers a point's position from before the matrix transform: `mScaleCenter` stores the pinch midpoint as it was before the outer matrix transform. Next, the scaling itself:

```java
private void scale(PointF scaleCenter, float scaleBase, float distance, PointF lineCenter) {
    ...
    // Compute the scale from the fit-center state to the target state
    float scale = scaleBase * distance;
    Matrix matrix = MathUtils.matrixTake();
    // Scale around the image's zoom center
    matrix.postScale(scale, scale, scaleCenter.x, scaleCenter.y);
    // Keep the image's zoom center under the midpoint of the two fingers
    matrix.postTranslate(lineCenter.x - scaleCenter.x, lineCenter.y - scaleCenter.y);
    mOuterMatrix.set(matrix);
    ...
}
```

This is easy to follow given the discussion above. One gripe: if `mOuterMatrix` had undergone a skew, rotation, or perspective transform, this would fall apart, wouldn't it? There is also the case of lifting one finger while several are down; the comments below have been reworded for clarity.

```java
if (action == MotionEvent.ACTION_POINTER_UP) {
    if (mPinchMode == PINCH_MODE_SCALE) {
        // event.getPointerCount() is the number of pointers at lift time,
        // including the one being lifted
        if (event.getPointerCount() > 2) {
            // event.getAction() >> 8 is the index of the pointer being lifted.
            // The first pointer is lifted, so use the second and third as the pinch points
            if (event.getAction() >> 8 == 0) {
                saveScaleContext(event.getX(1), event.getY(1), event.getX(2), event.getY(2));
            // The second pointer is lifted, so use the first and third as the pinch points
            } else if (event.getAction() >> 8 == 1) {
                saveScaleContext(event.getX(0), event.getY(0), event.getX(2), event.getY(2));
            }
        }
        // If only 2 pointers were down, a single pointer remains; do not enter
        // single-finger mode, because the image may not be in a legal position yet
    }
}
```
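The `event.getAction() >> 8` trick extracts the pointer index from the action word. The bit layout can be shown with the framework's constant values (`0xff00` mask, shift of 8; on current APIs `MotionEvent.getActionIndex()` does this for you):

```java
public class PointerIndexDemo {
    // Values of the MotionEvent constants involved
    static final int ACTION_MASK = 0x00ff;
    static final int ACTION_POINTER_INDEX_MASK = 0xff00;
    static final int ACTION_POINTER_INDEX_SHIFT = 8;
    static final int ACTION_POINTER_UP = 6;

    // Index of the pointer going up, as PinchImageView computes it
    static int pointerIndex(int actionWord) {
        return (actionWord & ACTION_POINTER_INDEX_MASK) >> ACTION_POINTER_INDEX_SHIFT;
    }

    public static void main(String[] args) {
        // Action word meaning "second pointer (index 1) lifted"
        int actionWord = (1 << ACTION_POINTER_INDEX_SHIFT) | ACTION_POINTER_UP;
        System.out.println((actionWord & ACTION_MASK) == ACTION_POINTER_UP); // true
        System.out.println(pointerIndex(actionWord)); // 1
    }
}
```

Note that the source shifts the raw action word without masking first; that works here because the index bits sit directly above the action bits.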

Finally, when the fingers are released, the bounds need correcting; this happens in the `scaleEnd` method. Most of its code was analyzed above; only one variable is new, `scalePost`.

```java
private void scaleEnd() {
    ...
    getCurrentImageMatrix(currentMatrix);
    float currentScale = MathUtils.getMatrixScale(currentMatrix)[0];
    float outerScale = MathUtils.getMatrixScale(mOuterMatrix)[0];
    // Scale correction factor
    float scalePost = 1f;
    // If the total scale exceeds the maximum, correct it
    if (currentScale > maxScale) {
        scalePost = maxScale / currentScale;
    }
    // If the corrected outer scale drops below 1 (its initial value; operations
    // must not shrink it past that), correct the scale again
    if (outerScale * scalePost < 1f) {
        scalePost = 1f / outerScale;
    }
}
```

The comments in this snippet were reworded by me.
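The two-step `scalePost` correction can be exercised standalone; a sketch with hypothetical scale values (the helper name is mine):

```java
public class ScaleEndDemo {
    // Mirrors the scalePost logic in scaleEnd(): clamp the total scale to
    // maxScale, then make sure the outer scale never drops below 1
    static float scalePost(float currentScale, float outerScale, float maxScale) {
        float scalePost = 1f;
        if (currentScale > maxScale) {
            scalePost = maxScale / currentScale;
        }
        if (outerScale * scalePost < 1f) {
            scalePost = 1f / outerScale;
        }
        return scalePost;
    }

    public static void main(String[] args) {
        float maxScale = 2f;
        // Zoomed past the maximum: a total scale of 4 gets pulled back to 2
        System.out.println(ScaleEndDemo.scalePost(4f, 8f, maxScale) * 4f);    // 2.0
        // Pinched below the initial size: an outer scale of 0.5 is restored to 1
        System.out.println(ScaleEndDemo.scalePost(0.1f, 0.5f, maxScale) * 0.5f); // 1.0
    }
}
```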

### 2.2 One-finger movement

One-finger movement mainly calls `scrollBy`, which was analyzed earlier.

That basically concludes the analysis.