iOS Development With Swift Part 6
This is part 6 of the Swift iOS programming tutorial series. The first part of the series is here and the previous part, tutorial #5, is here. In the last part we were able to play audio from a file and also change the rate of the audio to speed it up or slow it down. In this tutorial we will record audio on our first screen and use that recorded audio on the second screen, rather than the hardcoded audio we used in the last tutorial. We will also talk more about segues and navigation and make some changes to send the recorded audio to the next screen for playback.
Rename Main Controller
Right now the classes for the first and second screens are named ViewController and PlaySoundsViewController respectively. The second screen's name is a good one, as we can tell that the controller plays sounds, but the name ViewController for the first screen is not descriptive and does not tell us what the first controller is responsible for. As you may remember, we accepted the defaults and let Xcode choose ViewController rather than naming the main controller ourselves, but now we are going to change it to something more descriptive.
- Let’s think about what the first screen does. It has a record button, and we record the user’s audio there, so to make the controller’s responsibility clear in the class name we will rename it to RecordSoundsViewController. The next steps walk through this change.
- First show the navigator by pressing its icon in the top right of the Xcode toolbar. Select the first tab in the navigator, the Project Navigator, which lists all files in our project.
- Click on ViewController.swift in the Project Navigator, press the Enter (Return) key, and set the name to RecordSoundsViewController.
- Now in RecordSoundsViewController.swift, on the second line there is a comment with the text ViewController.swift; change it to RecordSoundsViewController.swift.
- In the same file change the class declaration from
class ViewController: UIViewController
- to
class RecordSoundsViewController: UIViewController
- As a last step go to your storyboard and show the Utilities pane (hide the navigator for more room). Now select your first screen (the one with the record button) in the storyboard and click the yellow icon that appears above the view controller.
- This yellow icon always selects the view controller for the scene. With the yellow icon selected, go to the Utilities pane on the right and select the third tab, the Identity Inspector. Change the Class using the dropdown from ViewController to RecordSoundsViewController.
- Now run the app to make sure that the change has worked.
Recording Audio In Swift
- If you search for how to record audio in Swift you will stumble across this article on Stack Overflow.
- Basically, if you study the code, you need to perform the following steps:
- Import AVFoundation to use the audio recorder.
- Make an object of the audio recorder (AVAudioRecorder).
- Create a session with settings to record and play.
- Find the directory where the application is allowed to save the file.
- As we currently don’t let the user name the saved file, we need to generate a unique file name for the recorded audio. A good way is to use the current time as the file name, append .wav at the end, and join it to the directory we found in the previous step.
- Finally, initialize the audio recorder with the file path we just constructed and start the recording process.
- You can see the code for this in the Udacity Swift course in Lesson 4a: Recording Audio, video 3, but I will paste the code here in chunks in order to explain each step. Okay, let’s jump in. First make sure you have opened RecordSoundsViewController.swift and then start making the following changes.
- Add the following line to import AVFoundation beneath where we import UIKit, just before the class declaration:
import AVFoundation
- Next, in the class declaration for RecordSoundsViewController, before the viewDidLoad function, declare an audio recorder object globally with the following code:
var audioRecorder:AVAudioRecorder!
- Next, add all the code in the following steps to the recordAudio function, beneath the code for enabling/disabling the record and stop buttons, at the end of the function. I will break the code into sub-steps to explain it better.
- First we need to find the directory where the application is allowed to write, in order to save the recorded audio file. Add the following code, which uses the NSSearchPathForDirectoriesInDomains function, for which you can read the documentation here. Basically, this function takes as input the directory type you want to search for and returns a list of matching paths as strings. We are looking for the documents directory so we get access to a place the application has permission to write to, and we only need one path, so we take the first one from the results returned.
let dirPath = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)[0] as String
- Next add the following code beneath step 1. This code gets the current date and time and formats them into a string that is valid as a file name. It then appends the .wav extension to this name, joins it to the directory path from step 1, and converts the string file path to an NSURL, which we will need when we initialize the audio recorder object later.
let currentDateTime = NSDate()
let formatter = NSDateFormatter()
formatter.dateFormat = "ddMMyyyy-HHmmss"
let recordingName = formatter.stringFromDate(currentDateTime)+".wav"
let pathArray = [dirPath, recordingName]
let filePath = NSURL.fileURLWithPathComponents(pathArray)
println(filePath)
- Next we need to create an audio session, which we do with the following code. Read a summary of AVAudioSession here. It allows us to start an audio session and stop it. We set the category so iOS knows what type of audio session we want. We could start a session that only records or only plays, but we will need to record and later play this audio, so we set the PlayAndRecord category. You can find the whole list of categories in the documentation here.
var session = AVAudioSession.sharedInstance()
session.setCategory(AVAudioSessionCategoryPlayAndRecord, error: nil)
- Next we initialize an audio recorder with the file path we constructed in step 2. We only need to tell iOS to prepare to record and then start recording. Now the recording session has started and the device is recording audio from the microphone.
audioRecorder = AVAudioRecorder(URL: filePath, settings: nil, error: nil)
audioRecorder.meteringEnabled = true
audioRecorder.prepareToRecord()
audioRecorder.record()
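- As an aside, passing nil for settings tells the recorder to use its defaults, which is all this tutorial needs. If you ever want control over the recording format, you could pass a settings dictionary instead; the snippet below is just an optional sketch using standard AVFoundation settings keys, not part of the tutorial’s code.
// Optional sketch (not part of the tutorial's code): explicit recorder
// settings. Passing nil above simply uses the defaults.
let recordSettings: [NSObject : AnyObject] = [
    AVFormatIDKey: Int(kAudioFormatLinearPCM), // uncompressed PCM suits a .wav file
    AVSampleRateKey: 44100.0,                  // CD-quality sample rate
    AVNumberOfChannelsKey: 1                   // mono is enough for voice
]
// Would replace the nil settings above:
// audioRecorder = AVAudioRecorder(URL: filePath, settings: recordSettings, error: nil)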
- Next, in the stopAudio function, add the following code, which stops the audio recorder and also stops the session we started for audio recording. Once we ask the audio recorder to stop, iOS will start processing the file and make it available for use.
audioRecorder.stop()
var audioSession = AVAudioSession.sharedInstance()
audioSession.setActive(false, error: nil)
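- For reference, here is the whole recordAudio function with the chunks above assembled (the button enabling/disabling lines from the last tutorial are summarized as a comment):
@IBAction func recordAudio(sender: UIButton) {
    // Button state changes from the previous tutorial go here.

    // 1. Find the app's documents directory.
    let dirPath = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)[0] as String

    // 2. Build a unique, timestamped .wav file path.
    let currentDateTime = NSDate()
    let formatter = NSDateFormatter()
    formatter.dateFormat = "ddMMyyyy-HHmmss"
    let recordingName = formatter.stringFromDate(currentDateTime)+".wav"
    let pathArray = [dirPath, recordingName]
    let filePath = NSURL.fileURLWithPathComponents(pathArray)
    println(filePath)

    // 3. Start an audio session that can both record and play.
    var session = AVAudioSession.sharedInstance()
    session.setCategory(AVAudioSessionCategoryPlayAndRecord, error: nil)

    // 4. Initialize the recorder with the file path and start recording.
    audioRecorder = AVAudioRecorder(URL: filePath, settings: nil, error: nil)
    audioRecorder.meteringEnabled = true
    audioRecorder.prepareToRecord()
    audioRecorder.record()
}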
- Now finally run your app and make sure the console is showing. As soon as you press the record icon you will see the path of the file where iOS will save the recording. Record some sound on your device and press the stop button. You can even test whether something was actually recorded by copying the file path from the console. The file path will be of the form
file:///Users/… so just copy everything from the first slash before Users and you will get a path like /Users/username… I tested this file by opening a terminal window and entering the command open /Users/username…, where /Users/username… is the path you copied from the console. The file will probably open in iTunes or some other media player and you will be able to hear the recorded session.
- Right now if you go to the second screen and press any of the buttons you will not hear the recorded voice, because we hardcoded the path to our own file. We need to get the file path from the recording session on the first screen over to the second screen. We will tackle that next.
Changing The Flow
Right now our flow is: a user comes to the first screen and presses the microphone button to start recording; when the user presses the stop button we stop the recording, and because we have a segue on the stop button the scene changes to the next screen, where we can play the audio at different rates. There is one serious problem with this flow: the user might record a very large audio file. In that case, when the user clicks the stop button and we stop the recording session, it will take the iPhone some time to process and save the file to the file system. We don’t check whether the phone has finished processing before moving to the next screen, so if we simply set the file path to send to the second controller, the phone may still be processing the file when we start playback in the second scene. To avoid this we will remove the segue from the button and create a manual segue instead: when the user clicks the stop button we listen for the moment the audio finishes processing using a callback function, and when that event fires we set the file path for the second controller and trigger the segue ourselves. This way we can be sure we only move to the second scene once the audio file has been fully processed. We will do that next, and we will also discuss how to pass data between controllers.
- First go to the storyboard and you will see an arrow pointing from the record screen to the play sounds screen. This is the segue we created earlier to change scenes on the stop button press. Click on this arrow and press the Delete key to remove the segue.
- Next click the record screen, click the yellow icon on top of this screen to select the controller, and control-drag from the yellow icon to the play sounds screen. Select show to create a manual segue.
- Now click on the new arrow between these two scenes, open Utilities, and open the fourth tab, the Attributes Inspector. In the Identifier field enter stopRecording. When we are ready to move on to the next scene we will call performSegue with this identifier in code.
Delegation
Now we want to know when the audio has stopped processing, and so far we don’t know a way to do that. We will learn about a new concept in iOS development called delegation. AVFoundation declares a protocol named AVAudioRecorderDelegate, which contains a function audioRecorderDidFinishRecording that is called when the audio has finished processing; you can see this delegate here. AVAudioRecorder defines a variable named delegate with type AVAudioRecorderDelegate, and somewhere inside its implementation it calls audioRecorderDidFinishRecording on that delegate. That is why, when we make ourselves the delegate in the record sounds view controller and implement the function, we get the event for stop recording. A generic sketch of the pattern follows, and then the concrete steps we took.
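Here is a minimal sketch of the delegation pattern with made-up names (Worker, WorkerDelegate, and Listener are hypothetical, purely for illustration; Worker plays the role of AVAudioRecorder and Listener plays the role of our view controller):
// The protocol declares the callback the worker will fire when done.
protocol WorkerDelegate {
    func workerDidFinish(worker: Worker)
}

class Worker {
    // Whoever wants the event registers here, like audioRecorder.delegate.
    var delegate: WorkerDelegate?

    func doWork() {
        // ... long-running task ...
        // When finished, notify the delegate, if one was set.
        delegate?.workerDidFinish(self)
    }
}

class Listener: WorkerDelegate {
    func workerDidFinish(worker: Worker) {
        println("worker finished")
    }
}

let worker = Worker()
worker.delegate = Listener() // same idea as audioRecorder.delegate = self
worker.doWork()              // eventually triggers workerDidFinish
The way we did this in our app is as follows: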
- In our RecordSoundsViewController.swift file we need to add AVAudioRecorderDelegate to the class declaration using the following code. This says that besides inheriting from UIViewController, we also conform to the AVAudioRecorderDelegate protocol.
class RecordSoundsViewController: UIViewController, AVAudioRecorderDelegate
- Next we want to set ourselves as the delegate of the AVAudioRecorder so we can listen for when the audio stops recording. In our recordAudio function we initialize an audio recorder object; we need to set the delegate property of that object to ourselves. So after the line audioRecorder = AVAudioRecorder(URL: filePath, settings: nil, error: nil) add the following line, which lets us listen for events in the audio recorder.
audioRecorder.delegate = self
- Lastly we need to implement the audioRecorderDidFinishRecording function in our class in order to know when the recording has stopped and move on to the next screen. Start typing audioRecorder and Xcode will try to help you with a dropdown; select the audioRecorderDidFinishRecording function and Xcode should autocomplete to give you the following code.
func audioRecorderDidFinishRecording(recorder: AVAudioRecorder!, successfully flag: Bool) { }
Model
Now in our audioRecorderDidFinishRecording function we need to save the audio details and move on to the next scene, the PlaySoundsViewController. This is how we do that.
- So far we have only used the view and controller parts of the MVC architecture. Now we will be introduced to the model. We will create a new class in which we save the title for the audio and the file path; we will set this in audioRecorderDidFinishRecording and pass it along to our play sounds view controller. This is the data the play sounds view controller needs to play back the audio. We could have just set variables on the play sounds view controller directly, but if our data grows more complex we would end up setting a lot of variables there, and if we want to perform functions on our data we would have to write a lot of extra code in the second controller. That is why we create our own class for our data, known as a model, and pass it between controllers, keeping the code related to our app’s data separate.
- For our model, title will be a String and filePathUrl will be of type NSURL, and we will call the model RecordedAudio.
- Now go to File -> New -> File and select Swift File from the Source tab.
- Set the name of the file to RecordedAudio.
- Add the following code to make the model with the attributes we want:
class RecordedAudio: NSObject {
    var filePathUrl: NSURL!
    var title: String!
}
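- As an optional refinement (not something this tutorial requires), you could give the model an initializer so both fields are always set at creation time. A minimal sketch; we will stick with the simple version above and set the properties individually:
class RecordedAudio: NSObject {
    var filePathUrl: NSURL!
    var title: String!

    // Optional convenience: set both fields up front.
    init(filePathUrl: NSURL!, title: String!) {
        self.filePathUrl = filePathUrl
        self.title = title
        super.init()
    }
}
// Usage would then be a single line:
// recordedAudio = RecordedAudio(filePathUrl: recorder.url, title: recorder.url.lastPathComponent)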
- Now go back to your RecordSoundsViewController file and go to your audioRecorderDidFinishRecording function.
- Declare a global variable in your class to hold the model, with the following code:
var recordedAudio:RecordedAudio!
- Add the following code in the audioRecorderDidFinishRecording function. The following code will make a new model and set the title and filePathUrl using the recorder object. The recorder object’s url property is the file path of the recorded file, and lastPathComponent retrieves the file name.
recordedAudio = RecordedAudio()
recordedAudio.title = recorder.url.lastPathComponent
recordedAudio.filePathUrl = recorder.url
- Next, to move on to the next screen, we call the segue we created between the record view controller and the play sounds view controller.
self.performSegueWithIdentifier("stopRecording", sender: recordedAudio)
- Actually, the audioRecorderDidFinishRecording function has a boolean flag as input that denotes whether the file was saved successfully, so we should only perform the segue if the file was recorded without errors; otherwise we should restore the record button and hide the stop button so the user can record again. Change the code in this function to the following. We are sending the model as the sender in the segue to pass it along to the next scene.
if(flag) {
    recordedAudio = RecordedAudio()
    recordedAudio.title = recorder.url.lastPathComponent
    recordedAudio.filePathUrl = recorder.url
    self.performSegueWithIdentifier("stopRecording", sender: recordedAudio)
} else {
    stopButton.hidden = true
    recordButton.enabled = true
}
Preparing For Segue
Now run the app; after stopping the recording, the scene should change. Next we have to set the recordedAudio model in our play sounds view controller and use it for playback.
- Every time a segue is performed from a controller, a function named prepareForSegue is called, which is used to prepare the destination controller for the segue. This is the function we need to implement in our record sounds view controller in order to set the model on our play sounds view controller. Start typing prepareForSegue and Xcode will autocomplete the function in the record sounds view controller. The function that will be produced is given below for reference:
override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {}
- Add the following code in this function. In prepareForSegue we have access to a storyboard segue variable named segue, which has an identifier property holding the same value we set when we defined the segue. It is beneficial to always check that the identifier matches, because if we have multiple segues defined for our controller we can run different code for different segues. Next we get the destination controller using the destinationViewController property of our segue object. We get the model from sender, as that is what we passed in when we called performSegue.
if(segue.identifier == "stopRecording") {
    let playSoundVC:PlaySoundsViewController = segue.destinationViewController as PlaySoundsViewController
    let data = sender as RecordedAudio
}
- Next, we need to pass this model along to our play sounds view controller, but right now we don’t have any variable on the second scene that can accept the model. So go to the PlaySoundsViewController.swift file and declare a global in the class, beneath where you defined audioPlayer, using the following code.
var receivedAudio:RecordedAudio!
- Go back to the record sounds view controller and set the receivedAudio property of our playSoundVC as follows:
playSoundVC.receivedAudio = data
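- Putting the pieces together, the complete prepareForSegue in the record sounds view controller now looks like this:
override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
    if(segue.identifier == "stopRecording") {
        // Hand the destination controller the model we passed as sender.
        let playSoundVC:PlaySoundsViewController = segue.destinationViewController as PlaySoundsViewController
        let data = sender as RecordedAudio
        playSoundVC.receivedAudio = data
    }
}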
- Now go to your PlaySoundsViewController and go to the viewDidLoad function. Comment out all the code in this function except the line super.viewDidLoad() and add the following code at the end of the function. This code uses the file path from the recorded audio passed in by the first scene. Now run the app and you should hear the recorded audio on the second scene when you click the fast or slow buttons.
audioPlayer = AVAudioPlayer(contentsOfURL: receivedAudio.filePathUrl, error: nil)
audioPlayer.enableRate = true
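- One optional caution: receivedAudio is an implicitly unwrapped optional, so if this scene were ever shown without going through our segue it would still be nil and the lines above would crash. A small defensive sketch, purely optional:
// Optional sketch: skip playback setup if the model was never set.
if receivedAudio != nil {
    audioPlayer = AVAudioPlayer(contentsOfURL: receivedAudio.filePathUrl, error: nil)
    audioPlayer.enableRate = true
}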
This concludes this tutorial. In the next tutorial we will finish up the app by adding buttons on our second screen to play the recorded audio in a chipmunk or Darth Vader voice. We will also see how to run the app on an actual device if you want to.
The next and final part of the series is here.
I finished everything, even the Darth Vader and chipmunk voices, because I skipped this problem, but audioRecorderDidFinishRecording isn’t being called so the segue is never performed! Here is my code:
//
//  RecordSoundsViewController.swift
//  Pitch Perfect
//
//  Created by Daniel Wang on 8/23/15.
//  Copyright (c) 2015 Daniel Wang. All rights reserved.
//

import UIKit
import AVFoundation

class RecordSoundsViewController: UIViewController, AVAudioRecorderDelegate {
    var recordedAudio: RecordedAudio!
    var audioRecorder: AVAudioRecorder!

    @IBOutlet weak var recordButton: UIButton!
    @IBOutlet weak var recordLabel: UILabel!
    @IBOutlet weak var stopButton: UIButton!
    @IBOutlet weak var recordingInProgress: UIButton!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    override func viewWillAppear(animated: Bool) {
        //hide the stop button
        stopButton.hidden = true
        recordButton.enabled = true
        recordLabel.hidden = true
    }

    @IBAction func recordAudio(sender: UIButton) {
        recordButton.enabled = false
        stopButton.hidden = false
        recordLabel.hidden = false
        let dirPath = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)[0] as! String
        let recordingName = "my_pitch_perfect_audio.wav"
        let pathArray = [dirPath, recordingName]
        let filePath = NSURL.fileURLWithPathComponents(pathArray)
        println(filePath)
        var session = AVAudioSession.sharedInstance()
        session.setCategory(AVAudioSessionCategoryPlayAndRecord, error: nil)
        audioRecorder = AVAudioRecorder(URL: filePath, settings: nil, error: nil)
        audioRecorder.meteringEnabled = true
        audioRecorder.prepareToRecord()
        audioRecorder.record()
    }

    func audioRecorderDidFinishRecording(recorder: AVAudioRecorder, successfully flag: Bool) {
        recordedAudio = RecordedAudio()
        recordedAudio.filePathUrl = recorder.url
        recordedAudio.title = recorder.url.lastPathComponent
        self.performSegueWithIdentifier("stopRecording", sender: recordedAudio)
    }

    override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
        if(segue.identifier == "stopRecording") {
            let playSoundsVC: PlaySoundsViewController = segue.destinationViewController as! PlaySoundsViewController
            let data = sender as! RecordedAudio
            playSoundsVC.receivedAudio = data
        }
    }

    @IBAction func stopAudio(sender: UIButton) {
        recordingInProgress.hidden = true
        //Inside func stopAudio(sender: UIButton)
        audioRecorder.stop()
        var audioSession = AVAudioSession.sharedInstance()
        audioSession.setActive(false, error: nil)
    }
}
}