Actor + Control Camera + game.Touches.Touch 1.x & .y + large scene size = weirdness
Right, I'm working on a game in which I want to use touch to control the actor on the screen. I want to use a bigger-than-screen-size scene with a 'control camera' behaviour on the actor, which should scroll the scene.
On the actor I have these Constrain Attribute behaviours:
Constrain Attribute: self.Position.X to game.Touches.Touch 1.X
And the same for Y.
Now, if I click on the screen, the actor moves to the right place. But as soon as I go beyond the camera's tracking area and the camera moves, the touch position isn't accurate any more. The further into the scene I click, the more off the touch position is.
I don't quite get what's going on here with the touch controls, do they not work with camera tracking or something?
Here's a little scene to show what I mean:
https://drive.google.com/file/d/0BzfOMShXSUofT2xUOVFJQW5aSGM/view
Comments
You just need to add the camera offset onto the end of your Constrain behaviours.
Unlock the actor in the scene:
Constrain self X to Touch 1.X + Camera Origin X
Constrain self Y to Touch 1.Y + Camera Origin Y
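(A hedged sketch, not actual GameSalad code: the two Constrain behaviours above boil down to adding the camera's origin to the raw screen-space touch, which a small Python function can illustrate.)

```python
def touch_to_scene(touch_x, touch_y, camera_origin_x, camera_origin_y):
    """Convert a screen-space touch position into scene-space coordinates
    by adding the camera's origin (how far the camera has scrolled)."""
    return touch_x + camera_origin_x, touch_y + camera_origin_y

# Camera at the default origin: screen and scene coordinates agree.
print(touch_to_scene(512, 384, 0, 0))        # (512, 384)
# Camera scrolled right and up: same touch, different scene point.
print(touch_to_scene(512, 384, 50000, 10000))  # (50512, 10384)
```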
Thank you Socks, works perfectly! I don't quite get what's going on here in terms of the logic, but I'm glad it works!
Good!
It's actually very straightforward.
When you touch the screen with your finger (or click with a mouse) the location of this click is a position on the actual physical screen, measured from the lower left hand corner.
So, regardless of your scene's size or shape, or where in the scene your player is, clicking right in the centre of (for example) a landscape iPad screen will return the values x512, y384. If your camera moves 50,000 pixels to the right (in a hypothetically massive scene) and you then click in the centre of the screen, you will still get x512, y384 for where the mouse is. The mouse/touch location measures where on the screen you are touching, not where in your scene you are touching.
(GameSalad's camera is effectively the engine's representation of the device's physical screen)
If the camera is in its default position and the scene is at the default size, then the camera and the scene share the same values: if you click in the middle of the screen (camera), you are clicking in the middle of the scene as well, so both return x512, y384 . . . which might lead some to think they are the same thing. It just so happens they are in sync, but they are measuring different things.
But if we move the camera 50,000 pixels to the right, we might sometimes want that click in the middle of the screen (camera) to return the value x50,512 rather than x512 . . . in these situations we simply need to add the amount the camera has moved (the camera origin is what moves when you move the camera).
Hope that makes sense!
Time for a terrible analogy . . . .
You have a 40" x 22.5" TV screen, you measure the location of a position on the screen from the lower left hand corner, you touch the middle of the screen, the x position is 20".
. . . ok, now you box up the TV and take it to Germany, some 1,000 miles away, you get the TV out and then touch the centre of the screen again, the location of the touch on the screen is still x20" . . .
. . . but you want to represent your touch in relation to the TV's origin (your home), rather than just where on the screen you are touching, so all you need to say is: I am touching a location on the screen 20" across + 1,000 miles from home.
But before you can do this you are arrested and taken to a high security mental facility as this is not the first time you've turned up in Germany touching TVs and mumbling something about coordinates, you make a note for yourself, reminding you to put some trousers on before leaving the house.
Brilliant, thanks Socks, it all makes sense now! Appreciate all the help.