
Using Gesture Recognisers to Handle Pinch, Rotate, Pan, Swipe and Tap Gestures

A gesture recogniser is an object of a concrete subclass of the abstract class UIGestureRecognizer. Such an object is attached to a view, and monitors for predefined gestures made on that view. Put simply, gestures are touches and movements of one or more fingers that happen over a specific area of the screen where a view of interest exists. In the early versions of the iOS SDK, gesture recognisers were not provided to developers, so implementing such interactions required a lot of manual work. Thankfully, Apple wrapped up all that manual work and handed it to developers as a single tool, and working with gestures became a really easy part of iOS programming.

As UIGestureRecognizer is an abstract class, it can't be used directly. Instead, specific subclasses of it are provided for use, and each one deals with a specific kind of gesture. Let's go through them for a moment and see what the respective gestures are for:

   1. UITapGestureRecognizer: This class handles tap gestures made on a view. It can be used to detect single or multiple taps, performed with one or more fingers. Tapping is one of the most common gestures users make.

   2. UISwipeGestureRecognizer: Another important gesture is the swipe, and this class exists just for it. Swiping happens when dragging a finger towards a direction (right, left, up or down). A characteristic example of the swipe gesture exists in the Photos app, where we slide our finger to move from one photo to another.

   3. UIPanGestureRecognizer: The pan gesture is essentially a drag gesture. It's used when you need to drag views from one point to another.

   4. UIPinchGestureRecognizer: When you view photos in the Photos app and use two fingers to zoom in or out of a photo, you perform a pinch gesture. As you understand, pinching requires two fingers. An object of this class usually comes in handy for changing the transform of a view, and more specifically its scale. Using pinch gestures, for example, you can implement zooming in and out of photos in your own apps.

   5. UIRotationGestureRecognizer: Similar in spirit to the previous gesture, rotation is used to rotate a view using two fingers.

   6. UILongPressGestureRecognizer: An object of this class monitors for long press gestures happening on a view. The press must last long enough to be detected, and the finger or fingers should not move far from the pressed point, otherwise the gesture fails.

   7. UIScreenEdgePanGestureRecognizer: This one is similar to the swipe gesture, but with one great difference: the finger movement must always begin from an edge of the screen.
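Although we won't build a demo for the long press recogniser in this tutorial, wiring one up follows the same pattern as the rest. Here's a minimal sketch; the someView property and handler name are only illustrative, not part of the demo project:

```objectivec
// In viewDidLoad (self.someView is any view of your controller):
UILongPressGestureRecognizer *longPressRecogniser = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(handleLongPressGesture:)];
longPressRecogniser.minimumPressDuration = 1.0;  // seconds the press must last
longPressRecogniser.allowableMovement = 15.0;    // max finger drift in points
[self.someView addGestureRecognizer:longPressRecogniser];

// The handler; long press recognisers report state changes, and the
// Began state marks the moment the gesture is detected.
-(void)handleLongPressGesture:(UILongPressGestureRecognizer *)gestureRecogniser
{
    if (gestureRecogniser.state == UIGestureRecognizerStateBegan) {
        NSLog(@"Long press detected");
    }
}
```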

All gesture recogniser objects perform an action once a valid gesture is detected. This is either an IBAction method, in case the gesture is added through Interface Builder, or a private or public method in the case of a programmatic implementation. When an action method is called after a gesture has happened, the gesture object always sends itself along, in case additional info is required when handling the gesture. It's not mandatory for the action methods to include the gesture object in their signatures, but I personally recommend that you do so as a better programming practice. For example, both of the following method signatures are valid:

-(void)handleMyTapGesture;
-(void)handleMyTapGestureWithGestureRecogniser:(UITapGestureRecognizer *)gestureRecogniser;

In the second case, the gestureRecogniser argument can provide you with extra info you might need, such as the view on which the gesture took place. In the examples in this tutorial, I'll use the second kind of method signature when declaring methods for handling gestures.

Looking now from the views' point of view, a view can contain more than one gesture recogniser. For instance, you can add both pinch and rotation gesture recognisers to an image view, so you can zoom in/out and rotate the presented image. However, only one gesture can occur at a given time. Gesture recognisers attached to a view are kept in an array property of that view, so you can access them just as you would access any object in a normal array. I guess, though, that you will rarely need to access a gesture recogniser object in such a way.
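For completeness, here's a quick sketch of that array-style access using the view's gestureRecognizers property (the testView name is just an example); views also offer removeGestureRecognizer: for detaching a recogniser you no longer need:

```objectivec
// Inspect the recognisers already attached to a view.
NSArray *recognisers = self.testView.gestureRecognizers;
NSLog(@"%lu recognisers attached", (unsigned long)recognisers.count);

// Access one of them like any array element...
UIGestureRecognizer *firstRecogniser = [recognisers objectAtIndex:0];

// ...and detach it if it's no longer needed.
[self.testView removeGestureRecognizer:firstRecogniser];
```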

I won't go through the properties and methods of each subclass one by one, because they have great similarities from class to class, so it would be pointless to discuss the same stuff twice. Furthermore, as I said previously, gesture recognisers can be added to views in two ways: either using Interface Builder, or programmatically. Here I am going to follow the second path and do everything in code. If you want to read more theory about gesture recognisers, then feel free to pay a visit to the official documentation provided by Apple.

Demo App Overview

The way we are going to work in this tutorial is straightforward. First of all, we will create a tabbed application containing five tabs. Each tab will correspond to a single view controller, and each view controller will be used to demonstrate a different gesture recogniser. Accordingly, we will create five view controller classes to implement the necessary code for every gesture recogniser we meet.

The gesture recognisers we are going to work with are, in the following order:

    1. Tap gesture recogniser
    2. Swipe gesture recogniser
    3. Pan gesture recogniser
    4. Pinch gesture recogniser
    5. Rotation gesture recogniser

For each one of them, we are going to create one or more test views, and then we will implement the necessary code to make the respective gestures work properly. So, you could say that the demo application of this tutorial is composed of many small examples, each one targeting the study of a single gesture recogniser.

App Creation and Setup

Let's begin by creating a new project in Xcode. Launch it, and in the first step of the guide select Tabbed Application as the template for your project.

Next, set GesturesDemo as the name of the project, and make sure that iPhone is the selected device.

Finish the guide by selecting a directory to store the project, and you are ready.

Now that the project has been prepared, let's set up our interface so that we can start working with the gesture recognisers straight away later on. Click on the Main.storyboard file, and let Interface Builder appear. As you'll notice, two view controllers are already connected to the tab bar controller, so we need to add three more. From the Object Library, drag and drop three new View Controller objects onto the IB canvas.

Before we connect the view controllers to the tab bar, we must create the necessary view controller classes. Begin by selecting the FirstViewController and SecondViewController classes (both the .h and .m files) in the Project Navigator, and then hit the Delete key on your keyboard. We are not going to use these files; we will create new classes for the gesture recognisers we'll work with.

In the confirmation dialog box that shows up, click on the Move to Trash button.

Next, start adding the new classes to the project. The procedure presented next should be repeated five times in total, until all the necessary classes have been added to the project. Let's see everything in detail for the first one:

Open the File > New > File… menu, and select to create a new Objective-C class in the guide that appears.

Move to the next step, and in the Subclass of field make sure that the value is set to UIViewController. If not, set it now. Next, name the new class by entering TapViewController in the Class field.

In the last step, click on the Create button to finish the guide and let Xcode create and add the new class to the project.

For the next view controllers that you will create, set the following class names:
  • SwipeViewController
  • PanViewController
  • PinchViewController
  • RotationViewController
After you have added all of them, here's how your Project Navigator should look:


Now we can return to Interface Builder. Firstly, make sure that the Utilities pane is on, because you will need it. Next, select the first view controller scene (let's work from top to bottom), and show the Identity Inspector in the Utilities pane. There, in the Class field of the Custom Class section, set the name of the first class you added to the project, TapViewController:

Repeat the above step until you set all the custom classes to the remaining view controllers.

Our next move is to connect all the view controllers to the tab bar controller. This can be done very easily: select the tab bar controller and then open the Connections Inspector in the Utilities pane. In the Triggered Segues section, click on the circle to the right of the view controllers option, drag on top of each unconnected view controller, and connect them one by one.

Once all the connections have been made, you can set the titles of the tab bar items of the view controllers. Working from top to bottom once again, select each tab bar item and then open the Attributes Inspector in the Utilities pane. There, set the proper tab bar title for each view controller, and set the first image as the image for all view controllers.

The tab bar item titles are in order:
  • Tap
  • Swipe
  • Pan
  • Pinch
  • Rotation
Finally, remove the existing content from the first two pre-made view controllers.

Everything is ready now. Optionally, you can add a label as a title to each view controller. If you would like to do that too, then drag a UILabel object to each view controller and set the following attributes:
  • Frame: X=20, Y=40, Width=280, Height=40
  • Font: System Bold, 20pt
  • Alignment: Center
The texts of the labels should be (following the same order as before):
  • Tap Gesture
  • Swipe Gesture
  • Pan Gesture
  • Pinch Gesture
  • Rotation Gesture
Now that we have set up the base we'll work on, we can dive directly into the heart of our topic. However, we'll return here to Interface Builder several times in order to create the needed test views for each gesture recogniser.

Tap Gesture Recogniser

We begin the real exploration of today's topic with the tap gesture recogniser, as you can tell from the title of this section. In the previous part, you added all the needed view controllers to the project and connected them to the tab bar controller. Now, in this part, we are going to add a view object (UIView) to the Tap view controller scene, which we will use as a test view for our work in code.

At first, make sure you still have Interface Builder open by clicking on the Main.storyboard file in the Project Navigator. Next, bring the Tap view controller scene in front of you, then grab a UIView object from the Object Library and drag it onto the scene's view. Set the following properties on that view:
  • Frame: X=110, Y=234, Width=100, Height=100
  • Background Color: R=215, G=116, B=52 (or any other color you like)
Now that the view is in place, we must create an IBOutlet property and connect it with the view. Open the TapViewController.h file and declare the following property:

@interface TapViewController : UIViewController
@property (weak, nonatomic) IBOutlet UIView *testView;
@end

Back on the Main.storyboard file again, go to the Document Outline pane, and Ctrl-Drag from the Tap view controller scene object to the view.

In the black popup window, just select the testView property, and you are all set.

Note: We are going to use the above procedure for adding test view objects and connecting them to IBOutlet properties in the upcoming sections as well. However, I'm not going to go into as much detail again, so if you need to, you can return here and follow the steps just described.

Now, open the TapViewController.m file. The first thing we must do is create an object of the UITapGestureRecognizer class, which we'll then add to the testView view. Actually, we will create two objects of that class: one for testing single taps, and one for testing double taps. Our work will initially take place in the viewDidLoad method, so let's get started with the first one.

The next code segment shows how we initialise a gesture recogniser object:

UITapGestureRecognizer *singleTapGestureRecogniser = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleSingleTapGesture:)];

In the same way you can initialise an object of any of the gesture recogniser subclasses, as long as you replace the name of the class. As you see, we specify a target and an action. The action in this case is a private method that we are about to create in a few seconds.

To assign the above gesture recogniser to our test view, here’s what we should write:

[self.testView addGestureRecognizer:singleTapGestureRecogniser];

The addGestureRecognizer: method is the single, standard way of adding a gesture recogniser object to a view.

Go now to the private section of the interface, and declare the private method we set to the gesture recogniser as follows:

@interface TapViewController ()
-(void)handleSingleTapGesture:(UITapGestureRecognizer *)tapGestureRecogniser;
@end

As I said in the introduction, the gesture recogniser object passes itself to the selector method, and by using this method signature we can use it later in the implementation. We could omit the parameter, but that's something I am intentionally going to avoid throughout the whole tutorial.

Let's move forward to the method's definition now. What are we going to do there? Well, nothing especially hard. We will simply double the width of the test view when we tap on it once, and revert to the original width value upon the next tap (and so on). Let's see it:

-(void)handleSingleTapGesture:(UITapGestureRecognizer *)tapGestureRecogniser
{
    CGFloat newWidth = 100.0;
    if (self.testView.frame.size.width == 100.0) {
        newWidth = 200.0;
    }

    CGPoint currentCenter = self.testView.center;

    self.testView.frame = CGRectMake(self.testView.frame.origin.x, self.testView.frame.origin.y, newWidth, self.testView.frame.size.height);
    self.testView.center = currentCenter;
}

The implementation is really simple. At first we check if the current width of the view is equal to 100.0 points, and if so we make it 200.0; otherwise we keep the initially assigned value (100.0). Next, we store the current center point in a CGPoint variable, change the width of the view, and center it again.

It's now time to try out the tap gesture. Run the app, and once it launches in either the Simulator or on a device, make sure you are on the first tab. Tap on the view once, and its size will change. Tap once again, and watch it go back to its original state. Simple and cool, right?

Let's get back to work. As I have already said, a tap gesture can be performed with one or more fingers, and the gesture can require one or more taps. So, let's see one more example, where this time we will tell the gesture recogniser object that we want two taps to happen, and that two fingers are required in order to perform the predefined action. In the viewDidLoad method, add the next lines:

- (void)viewDidLoad
{
    ...
    UITapGestureRecognizer *doubleTapGestureRecogniser = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleDoubleTapGesture:)];

    doubleTapGestureRecogniser.numberOfTapsRequired = 2;
    doubleTapGestureRecogniser.numberOfTouchesRequired = 2;
    [self.testView addGestureRecognizer:doubleTapGestureRecogniser];
}

Here we initialise a new tap gesture recogniser object and specify another action method that we'll implement in a while. What's new here, though, is the use of the two properties that let us set the number of required taps and touches (in other words, the number of fingers). Finally, we add the recogniser object to the testView view.

Note that our test view now contains two gesture recogniser objects.
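As an aside, when two tap recognisers coexist on the same view, you can make one wait for the other to fail, so that a double tap doesn't also trigger the single-tap action. Our demo doesn't need this, but a sketch using the requireGestureRecognizerToFail: API would look like the following, assuming both recogniser variables from viewDidLoad are in scope:

```objectivec
// Don't fire the single-tap action until the double-tap
// recogniser has had a chance to succeed or fail.
[singleTapGestureRecogniser requireGestureRecognizerToFail:doubleTapGestureRecogniser];
```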

Now, let’s get finished with the remaining tasks. Go to the private interface section and declare the new private method:

@interface TapViewController ()
...
-(void)handleDoubleTapGesture:(UITapGestureRecognizer *)tapGestureRecogniser;
@end

In its definition, we will change both the width and height of the view by doubling its size. We will follow the same logic as before:

-(void)handleDoubleTapGesture:(UITapGestureRecognizer *)tapGestureRecogniser
{
    CGSize newSize = CGSizeMake(100.0, 100.0);
    if (self.testView.frame.size.width == 100.0) {
        newSize.width = 200.0;
        newSize.height = 200.0;
    }

    CGPoint currentCenter = self.testView.center;

    self.testView.frame = CGRectMake(self.testView.frame.origin.x, self.testView.frame.origin.y, newSize.width, newSize.height);
    self.testView.center = currentCenter;
}

Run the app once again. This time, double-tap and use two fingers, otherwise the gesture will fail.

What you have seen in this part of the tutorial is, more or less, the way you work with all the gesture recognisers, even though each one has something special about it. For now, we have successfully managed to implement and use tap gestures, and that's quite important!

Swipe Gesture Recogniser

Another quite common and cool gesture recogniser is the swipe. Swiping can be done towards any of the four basic directions (right, left, up, down), but not diagonally. The UISwipeGestureRecognizer class provides a property that allows us to specify the direction; if none is set, then the right direction is used by default. A swipe gesture recogniser object can monitor and trigger actions for one direction only.

That means that if you want a view in your application to support swiping towards two or more directions, then you must create two or more gesture recogniser objects respectively. Beyond all that, note that the action triggered by the swipe movement starts right after the swiping is over (when the finger actually stops sliding).

In this part we are going to work with the SwipeViewController class that we previously added to the project. In the respective scene in Interface Builder, we are going to add three view (UIView) objects. The width of all three views will be equal to the screen's width. The first view will be placed on-screen, while the other two will be placed to the left and right of the first view, obviously outside the visible area. Our goal is to make these views move horizontally using swipe gestures, letting the hidden views be revealed by sliding either left or right.

Let’s see everything step by step, and let’s start by opening the Main.storyboard file. Go to the Swipe view controller scene and add the next three view objects by defining at the same time the frame and background color properties:

First View
  • Frame: X=0, Y=234, Width=320, Height=100
  • Background Color: R=215, G=116, B=52
Second View
  • Frame: X=320, Y=234, Width=320, Height=100
  • Background Color: Black Color
Third View
  • Frame: X=-320, Y=234, Width=320, Height=100
  • Background Color: R=0, G=128, B=0
Next, create the following IBOutlet properties to the SwipeViewController.h file:

@interface SwipeViewController : UIViewController
@property (weak, nonatomic) IBOutlet UIView *viewOrange;
@property (weak, nonatomic) IBOutlet UIView *viewBlack;
@property (weak, nonatomic) IBOutlet UIView *viewGreen;
@end

After having done so, go back to the Swipe view controller scene, and make the proper connections. Obviously, the viewOrange property matches to the first view, the viewBlack property matches to the second view, and the viewGreen property matches to the third view.

Now, let's head to the SwipeViewController.m file and start creating the gesture recognisers we need, one by one. Go to the viewDidLoad method and initialise such an object for the first view (the one in the middle, in the visible area of the screen). Let's see that:

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view.

    UISwipeGestureRecognizer *swipeRightOrange = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(slideToRightWithGestureRecogniser:)];
    swipeRightOrange.direction = UISwipeGestureRecognizerDirectionRight;
}

What we have done in this segment is quite clear: upon initialising the object we specify the action method that should be called when the swiping occurs, and then we set the direction of the swipe gesture towards the right.

Now, let’s create one more gesture recogniser object that will enable us to swipe towards left:

- (void)viewDidLoad
{
    ...
    UISwipeGestureRecognizer *swipeLeftOrange = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(slideToLeftWithGestureRecogniser:)];
    swipeLeftOrange.direction = UISwipeGestureRecognizerDirectionLeft;
}

Easy, right? All that's left to do is add both of these gesture recognisers to the viewOrange view, exactly as shown below:

- (void)viewDidLoad
{
    ...
    [self.viewOrange addGestureRecognizer:swipeRightOrange];
    [self.viewOrange addGestureRecognizer:swipeLeftOrange];
}

The action methods we set on the recognisers above should perform one simple thing: slide all the views either towards the left or towards the right, so that when a view leaves the visible area of the screen, another one appears. Let's declare them first, and we'll do the implementation next. In the private interface section add the next lines:

@interface SwipeViewController ()
-(void)slideToRightWithGestureRecogniser:(UISwipeGestureRecognizer *)gestureRecogniser;
-(void)slideToLeftWithGestureRecogniser:(UISwipeGestureRecognizer *)gestureRecogniser;
@end

Let’s see the implementation of the first method straight away:

-(void)slideToRightWithGestureRecogniser:(UISwipeGestureRecognizer *)gestureRecogniser
{
    [UIView animateWithDuration:0.5 animations:^{
        self.viewOrange.frame = CGRectOffset(self.viewOrange.frame, 320.0, 0.0);
        self.viewBlack.frame = CGRectOffset(self.viewBlack.frame, 320.0, 0.0);
        self.viewGreen.frame = CGRectOffset(self.viewGreen.frame, 320.0, 0.0);
    }];
}

As you can see, when we swipe towards the right, we want the X origin of each view to be increased by 320.0 points, moving all our views to the right. We make this movement look natural simply by wrapping everything in an animation block. Notice also that the movement speed depends on the animation duration: if you want a slower slide effect you should increase the animation duration, while if you need the views to move faster, just decrease it.

The second action method is going to be similar to this one, with only one difference: the offset on the X axis will be a negative number (again 320.0 points), so the views move to the left. Let's see this implementation as well:

-(void)slideToLeftWithGestureRecogniser:(UISwipeGestureRecognizer *)gestureRecogniser
{
    [UIView animateWithDuration:0.5 animations:^{
        self.viewOrange.frame = CGRectOffset(self.viewOrange.frame, -320.0, 0.0);
        self.viewBlack.frame = CGRectOffset(self.viewBlack.frame, -320.0, 0.0);
        self.viewGreen.frame = CGRectOffset(self.viewGreen.frame, -320.0, 0.0);
    }];
}

Run the app once again, and this time go to the second tab. Swipe towards the right or the left, and watch the views slide in and out in an animated fashion. However, when a new view appears on-screen, you'll notice that no swipe gesture works any more. Why does that happen?

The answer is simple and lies in the fact that we haven't created and added swipe gesture recognisers to the other two views. So, why don't we do that now and see if it works?

Go back to the viewDidLoad method, and start by adding the next segment:

- (void)viewDidLoad
{
    ...
    UISwipeGestureRecognizer *swipeRightBlack = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(slideToRightWithGestureRecogniser:)];
    swipeRightBlack.direction = UISwipeGestureRecognizerDirectionRight;
    [self.viewBlack addGestureRecognizer:swipeRightBlack];
}

With these commands, we created a new swipe gesture recogniser for a rightward gesture on the black coloured view, and we used the already implemented slideToRightWithGestureRecogniser: as the action method.

Let’s do the same for the green coloured view, but this time we must set the left direction:

- (void)viewDidLoad
{
    ...
    UISwipeGestureRecognizer *swipeLeftGreen = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(slideToLeftWithGestureRecogniser:)];
    swipeLeftGreen.direction = UISwipeGestureRecognizerDirectionLeft;
    [self.viewGreen addGestureRecognizer:swipeLeftGreen];
}

Okay, let’s give it another try. This time, everything works great!

Keep in mind that for every swipe gesture you want to support, you must create a new gesture recogniser object. Creating just one object and adding it to more than one view isn't going to work. If you really want proof of that, simply go to the viewDidLoad method and add the swipeRightOrange and swipeLeftOrange gesture recognisers to the other two views respectively. Run the app again, and then swipe your finger (or drag the mouse in the Simulator) just like before. Unfortunately, this time nothing will work, so set everything back to its original state.

Pan Gesture Recogniser

In the last two sections we saw two important gesture recognisers that can add great interaction to your apps. We will continue here by studying another one: the pan, or in other words drag, gesture. This gesture is handy when you want to allow your users to drag views around the screen. In this part, besides implementing the necessary code to support the pan gesture, we will also meet a special method of the UIPanGestureRecognizer class named velocityInView:. This method returns a CGPoint value and tells us how many points per second the dragged finger travels on both the horizontal and vertical axes while dragging. This information can be useful in some cases, so we will see how to access it.

Just like before, we’ll start by adding a test view to the Interface Builder. Go to the Main.storyboard file, and then drag a view (UIView) object to the Pan view controller scene. Set the next two attributes of it:
  • Frame: X=110, Y=234, Width=100, Height=100
  • Background Color: R=215, G=116, B=52 (or any other color you like)
Next, go to the PanViewController.h file and declare an IBOutlet property that you’ll later connect to that view:

@interface PanViewController : UIViewController
@property (weak, nonatomic) IBOutlet UIView *testView;
@end

Return to the Main.storyboard file, and connect the IBOutlet property to the view.

Once you have finished working with Interface Builder, click on the PanViewController.m file in the Project Navigator. The first thing we should do is create a pan gesture recogniser and add it to our test view. There's nothing difficult here, so let's see it:

- (void)viewDidLoad
{
    ...
    UIPanGestureRecognizer *panGestureRecogniser = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(moveViewWithGestureRecogniser:)];
    [self.testView addGestureRecognizer:panGestureRecogniser];
}

Now, let's declare and implement the method we set as the action to be taken when the pan gesture happens. In the private interface section, add this:

@interface PanViewController ()
-(void)moveViewWithGestureRecogniser:(UIPanGestureRecognizer *)panGestureRecogniser;
@end

As I have already said, what we want to do here is drag the test view as we move our finger on the screen. The easiest approach is to update the center point of the view for as long as the panning occurs. Let's see how that translates into code:

-(void)moveViewWithGestureRecogniser:(UIPanGestureRecognizer *)panGestureRecogniser
{
    CGPoint touchLocation = [panGestureRecogniser locationInView:self.view];
    self.testView.center = touchLocation;
}

Every gesture recogniser has a method named locationInView:. This method returns a CGPoint value representing the point in the given view that the user touched. In our case, by calling this method we get the touched point during dragging, making our app aware of the finger's movement. So, all we need to do is set that touch location as the new center point of the test view, and that's exactly what the second line above does.

Run the app now and place your finger or the mouse pointer on the test view. Then start dragging around and notice how the view follows the movement you make.

The approach presented above is kept simple for the sake of the tutorial. In a real application, you might want to enrich the movement of the view by adding acceleration or deceleration when you start or stop dragging, or anything else you need. It's up to you to apply the proper logic in the action method you implement.
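One common refinement, not used in this demo, is to move the view by the gesture's translation rather than snapping its center to the touch location; that way the view doesn't jump when you grab it off-center. A sketch of the handler using the translationInView: and setTranslation:inView: methods of UIPanGestureRecognizer might look like this:

```objectivec
-(void)moveViewWithGestureRecogniser:(UIPanGestureRecognizer *)panGestureRecogniser
{
    // How far the finger has moved since the last reset.
    CGPoint translation = [panGestureRecogniser translationInView:self.view];

    // Shift the view by that amount instead of recentering it on the touch.
    CGPoint center = self.testView.center;
    self.testView.center = CGPointMake(center.x + translation.x, center.y + translation.y);

    // Reset, so the next callback reports only the new movement delta.
    [panGestureRecogniser setTranslation:CGPointZero inView:self.view];
}
```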

At the beginning of this section, I said that there is a special method of the UIPanGestureRecognizer class named velocityInView:. Up to this point we have totally ignored it, but now we'll see how to access it and get the data it provides. For the sake of the demo, return to Interface Builder by clicking on the Main.storyboard file. Locate the Pan view controller scene, and add two labels with the following attributes:

Label #1
  • Frame: X=20, Y=445, Width=280, Height=21
  • Text: Nothing (empty)
  • Font size: 14pt
Label #2
  • Frame: X=20, Y=479, Width=280, Height=21
  • Text: Nothing (empty)
  • Font size: 14pt
As you may suspect, we are going to use these two labels to display the velocity on the horizontal and vertical axes respectively. But before we do so, we must create and connect two IBOutlet properties to these labels. So, open the PanViewController.h file and add the next two lines:

@interface PanViewController : UIViewController
...
@property (weak, nonatomic) IBOutlet UILabel *horizontalVelocityLabel;
@property (weak, nonatomic) IBOutlet UILabel *verticalVelocityLabel;
@end

Go back to the Interface Builder, and perform the appropriate connections.

Now, open the PanViewController.m file and go to our private action method named moveViewWithGestureRecogniser:. We will add some code that lets us get the velocity of the drag as a CGPoint value, and then extract the per-axis velocity from it. Remember that the velocity is expressed in points per second. Let's see the method:

-(void)moveViewWithGestureRecogniser:(UIPanGestureRecognizer *)panGestureRecogniser
{
    ...
    CGPoint velocity = [panGestureRecogniser velocityInView:self.view];

    self.horizontalVelocityLabel.text = [NSString stringWithFormat:@"Horizontal Velocity: %.2f points/sec", velocity.x];
    self.verticalVelocityLabel.text = [NSString stringWithFormat:@"Vertical Velocity: %.2f points/sec", velocity.y];
}

The value returned by the velocityInView: method is stored in a CGPoint structure. Then, by simply accessing the x and y members of that structure, we get the horizontal and vertical velocity. To keep things simple here we just display these values, but in a real application the velocity is only really useful if you perform calculations based on it.
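For instance, one calculation you might base on it is the overall speed of the drag, regardless of direction. This little fragment (not part of the demo) combines the two components with the Pythagorean theorem, inside the same handler where velocity is already defined:

```objectivec
// Combine the two axis components into an overall speed (points per second).
CGFloat speed = sqrt(velocity.x * velocity.x + velocity.y * velocity.y);
NSLog(@"Speed: %.2f points/sec", speed);
```

You could then use such a speed value to decide, for example, whether a released view should keep sliding with momentum.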

By running the app again you can see how “fast” the test view is moved around the screen while you drag it.
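As a hedged sketch of how the velocity might actually be put to work, the snippet below (a hypothetical helper, not part of the tutorial project) projects where a “flick” would carry the view and animates it there when the gesture ends; the 0.2-second projection window and the method name are assumptions for illustration:

```objc
// Hypothetical example: give the view a short "flick" based on the pan velocity.
// Intended to run when panGestureRecogniser.state == UIGestureRecognizerStateEnded.
-(void)applyFlickWithGestureRecogniser:(UIPanGestureRecognizer *)panGestureRecogniser{
     CGPoint velocity = [panGestureRecogniser velocityInView:self.view];

     // Project the current centre 0.2 seconds into the future
     // (velocity is expressed in points per second).
     CGPoint projectedCenter = CGPointMake(self.testView.center.x + velocity.x * 0.2,
                                           self.testView.center.y + velocity.y * 0.2);

     [UIView animateWithDuration:0.3
                           delay:0.0
                         options:UIViewAnimationOptionCurveEaseOut
                      animations:^{ self.testView.center = projectedCenter; }
                      completion:nil];
}
```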

Pinch Gesture Recogniser

The pinch gesture is useful for changing the transform of a view by scaling it up or down. The most characteristic example of that gesture can be found in the Photos app, where you pinch to zoom in and out. Here we won’t add an image view with an image in it, we’ll just use a simple view.

However, what we’ll do here applies to any view (including image views) whose scale you want to change. The big difference between the pinch gesture and the previous gesture recognisers is that it requires two fingers to perform.

As we did in the previous sections, we will begin by adding a test view to the Interface Builder. On the Project Navigator, click on the Main.storyboard file and then locate the Pinch view controller scene. Next, from the Object Library drag a view object (UIView) to the canvas, and set the next attributes:
  • Frame: X=85, Y=209, Width=150, Height=150
  • Background Color: R=215, G=116, B=52 (or any other color you like)
Now open the PinchViewController.h file, and declare an IBOutlet property:

@interface PinchViewController : UIViewController
    @property (weak, nonatomic) IBOutlet UIView *testView;
@end

Finally, in the Interface Builder connect that property to the test view you just added.

Similarly to the previous cases, we’ll begin the implementation in the viewDidLoad method of the PinchViewController.m file. All we have to do is initialise a pinch gesture recogniser object and add it to the test view:

- (void)viewDidLoad
{
     ...
     UIPinchGestureRecognizer *pinchGestureRecogniser = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinchWithGestureRecogniser:)];
     [self.testView addGestureRecognizer:pinchGestureRecogniser];
}

Now, we must declare the handlePinchWithGestureRecogniser: method, and then define it. First, go to the private class section:

@interface PinchViewController ()
  -(void)handlePinchWithGestureRecogniser:(UIPinchGestureRecognizer *)pinchGestureRecogniser;
@end

According to what I said in the beginning of this part, we will modify the transform of the test view by changing the scale value. That action will result in a zoom in/out effect, and as you’ll see, it’s just a matter of a single line:

-(void)handlePinchWithGestureRecogniser:(UIPinchGestureRecognizer *)pinchGestureRecogniser
{
     self.testView.transform = CGAffineTransformScale(self.testView.transform, pinchGestureRecogniser.scale, pinchGestureRecogniser.scale);
}

In this example we know which view the pinch gesture was applied to (the testView view), so we access it directly. However, there will be times when you need to access the pinched view in a more generic way. In that case, you can avoid referring to the view directly as shown above, and simply use the view property of the pinchGestureRecogniser parameter object. This property holds the view that the pinch gesture occurred on. Therefore, the above command could also be written as follows:

pinchGestureRecogniser.view.transform = CGAffineTransformScale(pinchGestureRecogniser.view.transform, pinchGestureRecogniser.scale, pinchGestureRecogniser.scale);

Now we are almost ready to test it. I say almost, because we need to add one more command; its purpose isn’t obvious at first, but you’ll understand why it’s needed after testing the above at least once. So, complete the method by adding this:

-(void)handlePinchWithGestureRecogniser:(UIPinchGestureRecognizer *)pinchGestureRecogniser{
     ...
     pinchGestureRecogniser.scale = 1.0;
}

It’s necessary to reset the scale value of the pinch gesture recogniser, because we apply the scale to the view’s current transform on every call of the action method; if we never reset it, the scale factor would compound from call to call and the zooming would look chaotic. With this single line, we achieve a smooth behaviour.
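An alternative (hypothetical) approach, instead of resetting the scale, is to remember the view’s transform when the gesture begins and apply the recogniser’s absolute scale to that saved transform. A sketch, assuming a CGAffineTransform property named initialTransform has been added to the class:

```objc
// Hypothetical alternative: apply the absolute scale to the transform saved
// when the gesture began, instead of resetting scale to 1.0 on every call.
// Assumes a CGAffineTransform property named initialTransform on the class.
-(void)handlePinchWithGestureRecogniser:(UIPinchGestureRecognizer *)pinchGestureRecogniser{
     if (pinchGestureRecogniser.state == UIGestureRecognizerStateBegan) {
          self.initialTransform = self.testView.transform;
     }
     else if (pinchGestureRecogniser.state == UIGestureRecognizerStateChanged) {
          self.testView.transform = CGAffineTransformScale(self.initialTransform,
                                                           pinchGestureRecogniser.scale,
                                                           pinchGestureRecogniser.scale);
     }
}
```

Both approaches produce the same visual result; the reset-to-1.0 version is simply shorter.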

Now you can run the app and test the pinch gesture. Act as you would on a photo when zooming in and out, and see how the test view reacts to your movements.

Rotation Gesture Recogniser

The rotation gesture recogniser bears great similarity to the pinch gesture recogniser: it also requires two fingers for the gesture to be successful, and it changes the transform of the view it is applied to by modifying its rotation. The rotation gesture is usually used in combination with other gestures, but it can also stand on its own.

In this part we are going to perform almost the same steps we did in the pinch gesture recogniser section. Therefore, open the Interface Builder, and go to the Rotation view controller scene. From the Object Library, drag and drop a view to the canvas with the following attributes:
  • Frame: X=85, Y=209, Width=150, Height=150
  • Background Color: R=215, G=116, B=52 (or any other color you like)
Next, in the RotationViewController.h file declare the following IBOutlet property…

@interface RotationViewController : UIViewController
      @property (weak, nonatomic) IBOutlet UIView *testView;
@end

… and then return to the Interface Builder to connect it to the view.

Moving to the RotationViewController.m file now, go straight to viewDidLoad. As in all the previous parts, our coding work starts there. We will simply create a new rotation gesture recogniser object and add it to the test view.

- (void)viewDidLoad
{
     ...
     UIRotationGestureRecognizer *rotationGestureRecogniser = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(handleRotationWithGestureRecogniser:)];
     [self.testView addGestureRecognizer:rotationGestureRecogniser];
}

As always, our next step is to declare the private method:

@interface RotationViewController ()
-(void)handleRotationWithGestureRecogniser:(UIRotationGestureRecognizer *)rotationGestureRecogniser;
@end

Finally, let’s implement it. Here we use the CGAffineTransformRotate function to change the rotation component of the test view’s transform. As you will see in the code, the rotation property of the rotationGestureRecogniser is reset to its initial value, for the same reason we reset the pinch recogniser’s scale: to avoid a compounding, unexpected rotation.

-(void)handleRotationWithGestureRecogniser:(UIRotationGestureRecognizer *)rotationGestureRecogniser
{
    self.testView.transform = CGAffineTransformRotate(self.testView.transform, rotationGestureRecogniser.rotation);
    rotationGestureRecogniser.rotation = 0.0;
}

Run the app now, and make sure to select the last tab of the tab bar controller. Then use two fingers to rotate the view in either a clockwise or counter-clockwise direction.
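Since rotation is so often combined with pinching, here is a hedged sketch (not part of the tutorial project) of how the two recognisers can be allowed to work on the same view at the same time, using the UIGestureRecognizerDelegate protocol:

```objc
// Sketch: allow pinch and rotation to be recognised simultaneously.
// Assumes the view controller adopts <UIGestureRecognizerDelegate> in its
// interface and sets itself as the delegate of both recognisers, e.g.:
//     pinchGestureRecogniser.delegate = self;
//     rotationGestureRecogniser.delegate = self;

-(BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer{
     // Returning YES lets both gestures drive the view's transform together.
     return YES;
}
```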

Other Gesture Recognisers

Beyond the gesture recognisers we met in the previous parts of the tutorial, there are two more that I’m going to just mention. We won’t look at their details for two reasons: they are used more rarely, and they are similar to gesture recognisers we have already seen. So, let’s go through them briefly.

The first one is the long press gesture recogniser, and the respective class is UILongPressGestureRecognizer. As its name suggests, an object of that class monitors for a press on a view that lasts longer than a tap. This class has three important properties:
  • minimumPressDuration: With this one, you specify the minimum time period (in seconds) that must elapse before the gesture is considered valid.
  • numberOfTouchesRequired: With this property, you specify how many fingers are required for the gesture. Usually one finger is fine, but it’s up to you to decide.
  • allowableMovement: This property defines the maximum distance (in points) the fingers may move while pressing. The fingers must remain as steady as possible on the touch point, otherwise the gesture fails.
So, if you want to use that gesture recogniser, always keep in mind that you should set suitable values for the above properties.
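As a hedged sketch, configuring and attaching a long press recogniser might look like the following (the one-second duration, the movement tolerance and the handler name are assumptions for illustration, not values from the tutorial project):

```objc
// Hypothetical configuration of a long press gesture recogniser.
UILongPressGestureRecognizer *longPressGestureRecogniser = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(handleLongPressWithGestureRecogniser:)];
longPressGestureRecogniser.minimumPressDuration = 1.0;   // seconds before the gesture fires
longPressGestureRecogniser.numberOfTouchesRequired = 1;  // a single finger is enough
longPressGestureRecogniser.allowableMovement = 10.0;     // points of drift tolerated
[self.testView addGestureRecognizer:longPressGestureRecogniser];
```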

The second gesture recogniser that I will just mention is the screen edge pan gesture recogniser. The UIScreenEdgePanGestureRecognizer class is new in iOS 7, and it works just like the swipe gesture recogniser, with one great difference though: the finger movement must begin near the edge of the screen.

Also, the navigation controller supports this gesture recogniser by default; it is what powers the swipe-from-the-left-edge back navigation. There’s one special property that must be set before this gesture recogniser is used, named edges. With it, you specify the edge from which the gesture should begin.
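A hedged sketch of setting up a screen edge pan recogniser (the handler name is an assumption for illustration):

```objc
// Hypothetical example: track pans that start at the left edge of the screen.
UIScreenEdgePanGestureRecognizer *edgePanGestureRecogniser = [[UIScreenEdgePanGestureRecognizer alloc] initWithTarget:self action:@selector(handleEdgePanWithGestureRecogniser:)];
edgePanGestureRecogniser.edges = UIRectEdgeLeft;  // the gesture must begin at the left edge
[self.view addGestureRecognizer:edgePanGestureRecogniser];
```

Note that the edge pan recogniser is attached to the controller’s root view rather than a subview, since the gesture originates at the screen edge.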

