scenekit - zoom in/out to selected node of scene - ios

I have a scene in which a human body is displayed. I want to zoom in to a specific body part when a user taps on it.
I changed the position of the camera to the position of the node, but it doesn't point exactly at it.
I also need to keep the selected part centered on the screen while zoomed in.
How can I accomplish zoom in / out?

What you want to do is add a UITapGestureRecognizer to the SCNView (gesture recognizers can't be attached to individual SCNNodes), hit-test to find the tapped node, and then move the camera toward it rather than scaling the node:
@objc func nodeSelected(_ node: SCNNode) {
    // Move the camera near the node instead of changing the node's scale
    let p = node.worldPosition
    cameraNode.position = SCNVector3(p.x, p.y, p.z + 0.5)
}
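A fuller sketch of the camera-based zoom, assuming a `cameraNode` property holding the scene's camera and an arbitrary 0.5-unit stand-off (both are assumptions, not from the question):

```swift
import SceneKit

// Sketch: animate the camera toward a tapped node and keep it centered.
// `cameraNode` is assumed to be the node that owns the SCNCamera.
func zoom(to node: SCNNode, cameraNode: SCNNode, distance: Float = 0.5) {
    SCNTransaction.begin()
    SCNTransaction.animationDuration = 0.75

    // A look-at constraint keeps the selected part in the center of the screen
    let lookAt = SCNLookAtConstraint(target: node)
    lookAt.isGimbalLockEnabled = true
    cameraNode.constraints = [lookAt]

    // Move the camera to a point `distance` in front of the node
    let p = node.worldPosition
    cameraNode.position = SCNVector3(Float(p.x), Float(p.y), Float(p.z) + distance)

    SCNTransaction.commit()
}
```

To zoom back out, run the same transaction again with the camera's original position and constraints.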


Xcode 9 Swift 4 - Reposition views by dragging

I'm quite new to iOS development, but I have years of programming experience.
Anyway, I'm having a hard time finding a solution to my problem.
In my app I render rows of colored circles based on data from the server.
Each of these circles has different properties set on the server.
One of these is the "offset" property.
It should be used to render the circle at a distance from its left sibling, or from the start of the parent view if it's the first.
Each circle should then also be movable by dragging it right or left, but never less than 0 from its left sibling.
In Android this was very easy: just set the left margin on drag, and all was good.
But in Xcode I'm having a very hard time figuring out how to get this done.
I'm sure it's me that's way too inexperienced, so I hope someone with a bit more Swift knowledge can help me with this.
Here are some images to make clear what I'm looking to achieve.
First render, where one circle has an offset
The gesture, where the third-to-last circle is dragged to the right
The result of the gesture
I need this to move seamlessly, so not repositioning after the gesture ends, but moving along with the finger.
As you can see, the circles to the right of the dragged one keep their relative position to the one being moved.
Thank you.
There are a couple of ways to do this. The first possible solution is to use swipe gestures to move the objects:
override func viewDidLoad() {
    super.viewDidLoad()
    let swipeGesture = UISwipeGestureRecognizer(target: self, action: #selector(handleSwipe(_:)))
    swipeGesture.direction = [.down, .up]
    view.addGestureRecognizer(swipeGesture)
}

@objc func handleSwipe(_ sender: UISwipeGestureRecognizer) {
    // React to the swipe here
}
Use these gestures to move the objects along with your finger; you can also use the .left and .right directions, depending on your needs.
The second solution for dragging components is a pan gesture:
func detectPan(_ recognizer: UIPanGestureRecognizer) {
    let translation = recognizer.translation(in: self.superview)
    self.center = CGPoint(x: lastLocation.x + translation.x, y: lastLocation.y + translation.y)
}

The translation variable holds the distance panned so far; the center of the view is adjusted according to the changed coordinates.
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    // Promote the touched view and remember its original location
    self.superview?.bringSubviewToFront(self)
    lastLocation = self.center
}

When the view is touched, it is brought in front of the other views and its current center is assigned to the lastLocation variable.
Hope this helps.
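A sketch that combines the pan gesture with the "never less than 0 from its left sibling" rule from the question. CircleView, leftNeighborMaxX and the clamping helper are illustrative names, not existing API:

```swift
import UIKit

// The proposed x is clamped so the circle never crosses its left boundary.
func clampedCenterX(proposed: CGFloat, minX: CGFloat) -> CGFloat {
    return max(proposed, minX)
}

// Sketch: a draggable circle that follows the finger but never moves
// left of its left sibling (or the parent's leading edge).
class CircleView: UIView {
    var lastLocation = CGPoint.zero
    // Right edge of the left sibling, supplied by whatever lays out the row.
    var leftNeighborMaxX: CGFloat = 0

    override init(frame: CGRect) {
        super.init(frame: frame)
        addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(detectPan(_:))))
    }
    required init?(coder: NSCoder) { fatalError() }

    @objc func detectPan(_ recognizer: UIPanGestureRecognizer) {
        switch recognizer.state {
        case .began:
            superview?.bringSubviewToFront(self)
            lastLocation = center
        case .changed:
            // The view tracks the finger continuously, so it moves during
            // the gesture rather than jumping when the gesture ends.
            let translation = recognizer.translation(in: superview)
            let minX = leftNeighborMaxX + bounds.width / 2
            let x = clampedCenterX(proposed: lastLocation.x + translation.x, minX: minX)
            center = CGPoint(x: x, y: lastLocation.y)
        default:
            break
        }
    }
}
```

To keep the circles on the right moving along, apply the same x-delta to every sibling whose index is higher than the dragged one.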

Move objects around, with gesture recognizer for multiple Objects

I am trying to make an app where you can use stickers like on Snapchat and Instagram. I found a technique that adds the images, but now I want the object to change its position when you swipe it around (I also want to add scale / rotate functionality).
My code looks like this:
@objc func StickerLaden() {
    for i in 0 ..< alleSticker.count {
        let imageView = UIImageView(image: alleSticker[i])
        imageView.frame = CGRect(x: StickerXScale[i], y: StickerYScale[i], width: StickerScale[i], height: StickerScale[i])
        imageView.isUserInteractionEnabled = true
        let slideGesture = UISwipeGestureRecognizer(target: self, action: #selector(SlideFunc(_:)))
        imageView.addGestureRecognizer(slideGesture)
        view.addSubview(imageView)
    }
}

@objc func SlideFunc(_ gesture: UISwipeGestureRecognizer) {
    // ...
}
Here are the high-level steps you need to take:
Add one UIPanGestureRecognizer to the parent view that has the images on it.
Implement UIGestureRecognizerDelegate methods to keep track of the user touching and releasing the screen.
On first touch, loop through all your images and call image.frame.contains(touchPoint). Add all images that are under the touch point to an array.
Loop through the list of touched images and calculate the distance of the touch point to the center of the image. Chose the image whose center is closest to the touched point.
Move the chosen image to the top of the view stack. [You now have selected an image and made it visible.]
Next, when you receive pan events, change the frame of the chosen image accordingly.
Once the user releases the screen, reset any state variables you may have, so that you can start again when the next touch is done.
The above will give you a nicely working pan solution. It's a fair number of things to sort out, but none of it is very difficult.
As I said in my comment, scale and rotate are very tricky. I advise you to forget that for a bit and first implement other parts of your app.
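The steps above can be sketched like this; StickerCanvas, the images array, and pickImage are assumed names, not the asker's code:

```swift
import UIKit

// Sketch of the pan-based sticker dragging described in the steps above.
class StickerCanvas: UIView {
    var images: [UIImageView] = []
    private var chosen: UIImageView?
    private var startCenter = CGPoint.zero

    // Steps 3-5: of all images under the touch, pick the one whose
    // center is closest to the touch point.
    func pickImage(at point: CGPoint) -> UIImageView? {
        return images
            .filter { $0.frame.contains(point) }
            .min { hypot($0.center.x - point.x, $0.center.y - point.y)
                 < hypot($1.center.x - point.x, $1.center.y - point.y) }
    }

    @objc func handlePan(_ pan: UIPanGestureRecognizer) {
        switch pan.state {
        case .began:
            chosen = pickImage(at: pan.location(in: self))
            if let chosen = chosen {
                bringSubviewToFront(chosen)   // step 6: move it to the top
                startCenter = chosen.center
            }
        case .changed:                        // step 7: follow the pan
            let t = pan.translation(in: self)
            chosen?.center = CGPoint(x: startCenter.x + t.x, y: startCenter.y + t.y)
        default:
            chosen = nil                      // step 8: reset for the next touch
        }
    }
}
```

Add one UIPanGestureRecognizer targeting handlePan(_:) to the canvas and append each sticker's image view to images.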

How to dynamically create annotations for 3D object using scenekit - ARKit in iOS 11?

I am working on creating annotations using overlaySKScene, similar to an example I followed to create the overlay.
But in the provided example, they are creating only one annotation and it is static. I want to create multiple annotations dynamically based on the number of child nodes we have and also should be able to position annotation on top of respective child node. How to achieve this?
I am adding the overlay like this:
sceneView.overlaySKScene = InformationOverlayScene(size: sceneView.frame.size)
where InformationOverlayScene is the SKScene to which I added two child nodes to create one annotation.
Create an array of annotation sprites mapped to the child-node array, and then do something like the following:
func renderer(_ aRenderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    let scnView = self.view as! SCNView
    // For each character, project its 3D position into the 2D overlay
    for i in 0 ..< inBattleChars.count {
        let healthbarpos = scnView.projectPoint(inBattleChars[i].position)
        battleSKO.healthbars[i].position = CGPoint(x: CGFloat(healthbarpos.x),
                                                   y: (scnView.bounds.size.height - 10) - CGFloat(healthbarpos.y))
    }
}
Before every frame is rendered this updates the position of an SKSprite (in healthBars) for each SCNNode in inBattleChars. The key part is where projectPoint is used to get the SK overlay scene's 2D position based on the SCNNode in the 3D scene.
To prevent the annotations of non-visible nodes from showing up (such as child nodes on the back side of the parent object), use SCNSceneRenderer's nodesInsideFrustum(of:) method.
You can add an SKScene or a CALayer as a material property.
You could create an SCNPlane with a specific width and height and set a SpriteKit scene as its material.
You can find an example here.
Then you just position the plane where you want it to be and create and delete the annotations as you need them.
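A sketch of that approach; the sizes and the label content are placeholders:

```swift
import SceneKit
import SpriteKit

// Sketch: build an annotation as a SpriteKit scene used as the material
// of a small plane, which can then be attached above any child node.
func makeAnnotationNode(text: String) -> SCNNode {
    let skScene = SKScene(size: CGSize(width: 200, height: 60))
    skScene.backgroundColor = .clear
    let label = SKLabelNode(text: text)
    label.fontSize = 24
    label.position = CGPoint(x: 100, y: 20)
    skScene.addChild(label)

    let plane = SCNPlane(width: 0.2, height: 0.06)
    plane.firstMaterial?.diffuse.contents = skScene   // SKScene as material property
    plane.firstMaterial?.isDoubleSided = true

    return SCNNode(geometry: plane)
}
```

Position the returned node above the respective child node (for example childNode.addChildNode(annotation) with a small y offset) and create or delete annotations as needed.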

How to make hit area bigger in swift?

Hi, I'm new to Swift and I'm creating a game based on touching targets. Each target is an image in an SKSpriteNode. They are small, so sometimes a touch misses the target; I want to know how to make the hit area bigger without making the target itself bigger.
I use this code to detect touches for each target:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    /* Called when a touch begins */
    for touch in touches {
        let location = touch.location(in: self)
        let node = self.atPoint(location)
        if == "target1" {
            // do some stuff
        }
    }
}

Any suggestions?
You can create a transparent clickable area that is bigger than the image you want to show.
So, for example, you create:
let newNode = SKSpriteNode(color: .clear, size: biggerSize)   // invisible hit area
let originalNode = SKSpriteNode(imageNamed: "target1")        // the visible image
newNode.addChild(originalNode)
You can also do this in touchesBegan:
let touch = touches.first
When you know the hit area you want for each sprite, you can calculate which sprite is the one you want to hit.
This way you have a big transparent area to work with. The same holds for UIImageView: you can add a view that is mostly empty and only there to register touches, then add a UIImageView to it; the image view only displays the image and does not have to be as big as the containing view.
Hope this helps ;)
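A sketch of the transparent hit-area idea; the node name "target1" matches the question's touch code, and the margin is arbitrary:

```swift
import SpriteKit

// Sketch: wrap the visible target in a larger, invisible sprite that
// still participates in hit testing, so near-misses count as hits.
func makeTarget(imageNamed imageName: String, hitMargin: CGFloat) -> SKSpriteNode {
    let visible = SKSpriteNode(imageNamed: imageName)
    let hitSize = CGSize(width: visible.size.width + hitMargin * 2,
                         height: visible.size.height + hitMargin * 2)
    let hitArea = SKSpriteNode(color: .clear, size: hitSize) = "target1"      // atPoint outside the image returns this = "target1"      // atPoint on the image itself returns this
    hitArea.addChild(visible)
    return hitArea
}
```

Because both nodes share the name, the == "target1" check in touchesBegan fires whether the touch lands on the image or only inside the margin.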
A view cannot simply detect a touch outside itself. By default, the user must touch the view in order for that view to be the hit-test view and to receive touchesBegan. What is your objection to making the target bigger?
The alternative is to detect the touch elsewhere and calculate, yourself, that you want to respond with respect to this view. But you would not do that through touchesBegan on this view.
You can override UIView's point(inside:with:) method to expand the clickable area.
Check this out, my post in another question.
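A sketch of that override; BigHitView and the 20-point margin are illustrative:

```swift
import UIKit

// Sketch: a view that accepts touches up to `hitMargin` points outside its bounds.
class BigHitView: UIView {
    var hitMargin: CGFloat = 20

    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        // Grow the hit-test rectangle outward by the margin on every side
        return bounds.insetBy(dx: -hitMargin, dy: -hitMargin).contains(point)
    }
}
```

Note this only helps while the touch is still inside the superview's bounds; the superview's own point(inside:with:) may need the same treatment.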

SceneKit LookAt constraint not animating smoothly

I am having an issue animating the look-at constraint. I have a scene with multiple objects that can be tapped. Once you tap on an object, the camera's position is updated to be a certain distance from the object and a look-at constraint is applied.
The issue is with the animation of the look-at constraint inside the SCNTransaction begin/commit block. This is the code that applies the look-at:
// Causes the camera to look at the given target
private func cameraLookAtTarget(target: SCNNode) {
    let constraint = SCNLookAtConstraint(target: target)
    constraint.isGimbalLockEnabled = true
    cameraNode.constraints = [constraint]
}
And the block of code where this gets called when the object is tapped:
The animation does a jump first and then animates to look at the object instead of smoothly rotating the camera to look at the new target. I can confirm that it is the look at constraint, since updating the camera position moves it smoothly.
Edit: Basically, it's as though the camera is reset first in the animation and then animates to look at the target. I want the look-at constraint to animate from the current object it is looking at to the new target.
I can't figure out the problem. In Unity I can easily get a look rotation and slerp to it in the update method so that the camera slowly turns to look at the object.
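Not from the thread, but one workaround sketch: SCNConstraint's influenceFactor property is animatable, so you can add the new constraint with zero influence and ramp it to full inside an SCNTransaction, which blends from the camera's current orientation instead of snapping:

```swift
import SceneKit

// Sketch: blend the camera toward a new look-at target instead of snapping.
// `cameraNode` is assumed to be the node holding the camera.
func smoothlyLook(at target: SCNNode, cameraNode: SCNNode) {
    let constraint = SCNLookAtConstraint(target: target)
    constraint.isGimbalLockEnabled = true
    constraint.influenceFactor = 0          // start with no effect on the camera

    cameraNode.constraints = [constraint]

    SCNTransaction.begin()
    SCNTransaction.animationDuration = 1.0
    constraint.influenceFactor = 1          // blend in over one second
    SCNTransaction.commit()
}
```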