I believe that part of the confusion about when to write tests for user stories arises because people tend to jump into talking about "tests" too quickly. There's a subtle difference between Acceptance Criteria and Acceptance Tests, but these terms are often bandied about as if they meant the same thing.
Liz Keogh has an excellent blog post on Acceptance Criteria vs. Scenarios where she explains that acceptance criteria are general rules covering system behaviour from which executable examples (Scenarios) can be derived. In his book "Agile Estimating and Planning", Mike Cohn calls these rules "Conditions of Satisfaction"; you can simply list them as bullet points on the back of story cards. As Liz points out, you can uncover acceptance criteria by having conversations with your stakeholders about example scenarios. So acceptance criteria and example scenarios are a bit like the chicken and the egg - it's not always clear which comes first, so iterate!
In contrast, acceptance tests are essentially scripts - manual or automated - that detail the specific steps for checking that a feature behaves as expected. You don't need to write out all of these tests in order to plan the next iteration. Instead, you can use acceptance criteria to clarify the scope of each user story, so the team understands just enough detail to agree on what to work on in the next iteration. You can also use acceptance criteria to split "epic" stories - optional or nice-to-have acceptance criteria can be bundled into later stories.
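To make the distinction concrete, here's a minimal sketch in Python. The story, the criterion, and the tiny `Account` class are all hypothetical, invented for illustration: the point is how a bullet-point criterion ("a customer cannot withdraw more than their balance") turns into an automated script with specific Given/When/Then steps.

```python
class Account:
    """Toy stand-in for the system under test (illustrative only)."""

    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


def test_cannot_withdraw_more_than_balance():
    # Given a customer with a balance of 100
    account = Account(balance=100)
    # When they try to withdraw 150
    try:
        account.withdraw(150)
        withdrawn = True
    except ValueError:
        withdrawn = False
    # Then the withdrawal is refused and the balance is unchanged
    assert not withdrawn
    assert account.balance == 100


test_cannot_withdraw_more_than_balance()
```

The criterion itself stays a one-line rule on the story card; only the test spells out concrete values and steps.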
So when do we write the acceptance criteria? The whole team can get together to flesh out the user stories with acceptance criteria as part of iteration planning, but this can take a while. Larger or distributed teams may prefer to do this activity before the planning meeting - a trio of customer, developer, and tester is usually all you need. If you work this way, remember to review the acceptance criteria with the whole team before committing to delivering a story in the next iteration.
Now we get to when to write the tests. Acceptance test scripts can be written in the same iteration as the code, either before or in parallel with developing it. If you're lucky, your customer has time to sit down and do this with you; otherwise a developer and tester pairing together can do it. Remember to check with your customer that you've captured the intent of the user story before going ahead and implementing the code.
Of course, I'm not suggesting that you write all of the acceptance tests for all the stories at the beginning of the iteration. Rather, when work starts on a story, the first thing to do is to get really clear about which tests should pass - following an Acceptance Test-Driven Development (ATDD) approach. You can do this by writing one acceptance test at a time or the whole set for the story.
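One way to sketch the "whole set up front, one test at a time" style is with Python's `unittest`, marking criteria the team hasn't started yet as skipped. The "free shipping" story, its criteria, and the `Order` class here are invented for the example; the deferred criterion is a visible placeholder, so the suite stays green while the outstanding work remains on the radar.

```python
import unittest


class Order:
    """Toy stand-in for the feature under test (illustrative only)."""

    def __init__(self, total):
        self.total = total

    @property
    def shipping(self):
        # Hypothetical rule: orders of 50 or more ship free.
        return 0 if self.total >= 50 else 5


class FreeShippingStory(unittest.TestCase):
    # First criterion: worked on now, test written before the code.
    def test_orders_of_50_or_more_ship_free(self):
        self.assertEqual(Order(total=50).shipping, 0)

    # Remaining criterion: listed up front but deferred until work starts.
    @unittest.skip("criterion not yet implemented")
    def test_discounted_items_count_toward_threshold(self):
        self.fail("flesh this out when work starts on the criterion")
```

Run it with `python -m unittest`; the skipped test shows up in the output as a reminder without failing the build.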
Summing up, explore acceptance criteria and example scenarios as part of planning and write the tests close to when you write the code. Please do post any further questions and suggestions as comments.