We describe a patch to the Prebid Mobile SDKs (iOS and Android) that saves at least one round trip to the Prebid Cache Server every time the Primary Ad Server SDK WebView renders a winning ad creative coming from the Prebid Server. The patch caches the winning creative locally on the client and loads it from a minimalistic HTTP server running within the Prebid Mobile SDK, instead of making a round trip over the Internet to the Prebid Cache Server. Implementation details are described, and ad creative loading time is measured and compared to that of the vanilla Prebid Mobile integration to evaluate the reduction in loading time.
Mobile header bidding (or in-app bidding) is an important ad monetization technique that allows publishers to increase their revenue. The Prebid.org project is an open source initiative developed by several AdTech industry leaders; it provides an open source Prebid Server and the Prebid Mobile SDKs. The most common Prebid Mobile integration scenario is to use it in conjunction with a primary ad server, which supplies the ad source waterfall. Prebid submits a winning bid that participates in the primary ad server waterfall just like an ordinary ad source. This is a hybrid approach that marries Prebid (true parallel mediation) with the legacy waterfall model.
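For readers who haven't seen this hybrid setup in code, here is a rough sketch of the iOS integration. The names follow the public Prebid Mobile and MoPub APIs, but the module names, exact signatures, and the placeholder IDs are assumptions to check against the SDK versions you use.
import UIKit
import PrebidMobile
import MoPubSDK   // the module may be named `MoPub` in older MoPub SDK versions

// A sketch of the hybrid setup: Prebid runs its parallel auction, attaches
// hb_* targeting keywords to the MoPub ad object, and only then asks MoPub
// to load the ad through its regular waterfall.
final class PrebidBannerLoader {
    // "prebid-config-id" is a placeholder registered on the Prebid Server side.
    private let adUnit = BannerAdUnit(configId: "prebid-config-id",
                                      size: CGSize(width: 300, height: 250))
    private let banner: MPAdView

    init(banner: MPAdView) {
        self.banner = banner
    }

    func load() {
        adUnit.fetchDemand(adObject: banner) { [weak self] _ in
            // Load via MoPub whether or not Prebid returned a bid, so the
            // normal waterfall still runs as a fallback.
            self?.banner.loadAd()
        }
    }
}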
Here is the data flow step-by-step:
Looking at the data flow, there is potential for improvement. At step 2 we receive the winning creative, so it is already on the client, inside the Prebid SDK. However, at step 7 the WebView inside the Primary Ad Server SDK has to request it again from the cache server. If there were a way to pass the winning creative from the Prebid SDK directly into that WebView, we would not have to request it again at step 7. The main question is: how?
In this article we'll describe a way to pass the winning creative directly into the WebView within the Primary Ad Server SDK and eliminate the extra request to the Prebid Cache Server. We have also measured the time saved by rendering the creative directly, without a round trip to the Prebid Cache Server. This article shows an iOS implementation in Swift; the corresponding Android version in Kotlin is available in our open source fork of the Prebid Mobile SDKs. The primary ad server we will be using is MoPub.
The initial spike was implemented as a patch to the Prebid Mobile SDK and tested using the provided open source demo app.
We initially tried to cache the winning creative somewhere on the device, using the bid ID as part of the file name, and then configure the MoPub creative to fetch it like this:
<iframe src="file:///SomePathToApp/cache-32134232.html">
It turned out that modern mobile browsers and mobile WebViews do not allow a page loaded from the web to load local content. For that to work, the page itself has to be loaded from a local file as well.
The only other solution we saw was an in-process HTTP server running as part of the Prebid Mobile SDK and bound strictly to localhost to serve cached creatives. That would allow us to use a MoPub creative similar to this one:
<iframe src="http://localhost:12643/32134232">
Eventually this laid the foundation for our final solution. Here is the patched scheme, with the extra request eliminated:
Luckily, the Prebid SDK is written in Swift, so we were able to use a modern programming language without digging into any legacy Objective-C code. As expected, there are a few HTTP server implementations in Swift, but some of them are based on SwiftNIO, and we didn't want to add such a heavy dependency to the Prebid SDK; we wanted to keep the footprint of our solution as small as possible. Other existing server implementations are overloaded with features we didn't need: WebSocket support, powerful routing, and so on.
As we needed a really simplistic server, not intended for heavy use, we decided to implement it ourselves using OS sockets, with speed and simplicity as the main goals.
First, we created a higher level socket wrapper.
import Foundation
import Darwin.C
enum SocketError: Error {
case cantCreate(code: Int32)
case cantBind(code: Int32)
case cantListen(code: Int32)
var localizedDescription: String {
switch self {
case let .cantBind(code):
return "Can't bind socket: \(code)"
case let .cantCreate(code):
return "Can't create server socket: \(code)"
case let .cantListen(code):
return "Can't listen on socket: \(code)"
}
}
}
public class ServerSocket {
private let zero: Int8 = 0
private var sockAddr: sockaddr_in
private let cSocket: Int32
private let socklen: UInt8
var isRunning = false
init(port: UInt16) throws {
// Convert the port to network byte order (equivalent to htons)
let htonsPort = port.bigEndian
let sock_stream = SOCK_STREAM
cSocket = socket(AF_INET, Int32(sock_stream), 0)
guard self.cSocket > -1 else {
throw SocketError.cantCreate(code: Darwin.errno)
}
socklen = UInt8(socklen_t(MemoryLayout<sockaddr_in>.size))
sockAddr = sockaddr_in()
sockAddr.sin_family = sa_family_t(AF_INET)
sockAddr.sin_port = in_port_t(htonsPort)
// bind address to localhost only
sockAddr.sin_addr = in_addr(s_addr: UInt32(0x7f_00_00_01).bigEndian)
sockAddr.sin_zero = (zero, zero, zero, zero, zero, zero, zero, zero)
#if os(macOS)
sockAddr.sin_len = socklen
#endif
}
public func bindAndListen() throws {
try withUnsafePointer(to: &self.sockAddr) { sockaddrInPtr in
let sockaddrPtr = UnsafeRawPointer(sockaddrInPtr).assumingMemoryBound(to: sockaddr.self)
guard bind(self.cSocket, sockaddrPtr, socklen_t(self.socklen)) > -1 else {
throw SocketError.cantBind(code: Darwin.errno)
}
}
guard listen(self.cSocket, 5) > -1 else {
throw SocketError.cantListen(code: Darwin.errno)
}
isRunning = true
}
public func acceptClientConnection() -> ClientConnection {
return ClientConnection(sock: self.cSocket)
}
public func close() {
Darwin.close(cSocket)
isRunning = false
}
}
Please note that we bind the server to the localhost address to rule out any possibility of remote access. Now we need a class that handles client connections; it also includes methods to read and write data.
public class ClientConnection {
private let clientSocket: Int32
private let bufferMax = 2048
private var readBuffer: [UInt8]
init(sock: Int32) {
var length = socklen_t(MemoryLayout<sockaddr_storage>.size)
let addr = UnsafeMutablePointer<sockaddr_storage>.allocate(capacity: 1)
let addrSockAddr = UnsafeMutablePointer<sockaddr>(OpaquePointer(addr))
readBuffer = Array(repeating: UInt8(0), count: bufferMax)
clientSocket = accept(sock, addrSockAddr, &length)
}
private func send(_ socket: Int32, _ output: String) {
_ = output.withCString { (bytes) in
Darwin.send(socket, bytes, Int(strlen(bytes)), 0)
}
}
func readRequest() -> String? {
// Read the raw request into the pre-allocated buffer and decode it as UTF-8
let bytesRead = readBuffer.withUnsafeMutableBytes { buffer in
Darwin.read(clientSocket, buffer.baseAddress, bufferMax)
}
guard bytesRead > 0 else {
return nil
}
return String(bytes: readBuffer[0..<bytesRead], encoding: .utf8)
}
func respond(withHeaders: String, andContent: String = "") {
let response = withHeaders + "\r\n\r\n" + andContent
send(clientSocket, response)
close()
}
func close() {
Darwin.close(clientSocket)
}
}
These classes are designed to be used only with our own server, so the encapsulation could certainly be better, but we postponed refactoring to a later stage.
Now we need a few additional classes for parsing requests and forming responses. They were also made as simple as possible, with no attempt to be a universal server solution.
struct ServerRequest {
let method: String
let parameters: String
init?(rawRequest: String?) {
// split first line of request
guard let request = rawRequest, let verb = request.split(separator: "\r\n").first else {
return nil
}
// get command and parameters
let splittedVerb = verb.split(separator: " ")
guard splittedVerb.count == 3, splittedVerb[1].count>0 else {
return nil
}
method = String(splittedVerb[0])
parameters = String(splittedVerb[1])
}
}
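To make the parsing contract concrete, here is what a typical request from the WebView looks like when it reaches ServerRequest; the cache ID in the path is made up.
let raw = "GET /32134232 HTTP/1.1\r\nHost: localhost:16257\r\nAccept: */*\r\n\r\n"
if let request = ServerRequest(rawRequest: raw) {
    print(request.method)      // "GET"
    print(request.parameters)  // "/32134232"; the leading "/" is stripped later to get the cache key
}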
To represent the response we used a plain enum with associated values, conforming to the desired protocol in an extension.
protocol ServerResponse {
var headers: [String] { get }
var body: String { get }
}
enum SimpleServerResponses: Equatable {
case ok(content: String)
case errorNotFound
case errorParsing
}
extension SimpleServerResponses: ServerResponse {
var headers: [String] {
let code: Int
switch self {
case .ok:
code = 200
case .errorNotFound:
code = 404
case .errorParsing:
code = 400
}
let respText = code == 200 ? "OK" : body
return ["HTTP/1.1 \(code) \(respText)",
"Access-Control-Allow-Origin: *",
"Server: Simple HTTP Server",
"Content-Length: \(body.count)"
]
}
var body: String {
switch self {
case let .ok(content):
return content
case .errorNotFound:
return "Not found"
case .errorParsing:
return "Bad request"
}
}
}
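To make the wire format concrete, here is roughly what a cache hit turns into once SimpleServer (shown below) joins the headers and appends the body; the creative markup is a placeholder.
let response: ServerResponse = SimpleServerResponses.ok(content: "<div>creative</div>")
let raw = response.headers.joined(separator: "\n") + "\r\n\r\n" + response.body
print(raw)
// HTTP/1.1 200 OK
// Access-Control-Allow-Origin: *
// Server: Simple HTTP Server
// Content-Length: 19
//
// <div>creative</div>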
Now we’re ready to connect it all together to create a server.
import Foundation
import Dispatch
class SimpleServer {
private let serverSocket: ServerSocket?
private let workQueue = DispatchQueue(label: "simple.http.server.worker", qos: .userInteractive, attributes: .concurrent)
private(set) var started = false
private let respCache = ResponseCache()
private let handler: ServerResponseHandler
init(port: UInt16, handler: ServerResponseHandler) {
self.handler = handler
do {
serverSocket = try ServerSocket(port: port)
} catch {
Log.error("Error creating socket: \(error)")
serverSocket = nil
}
}
deinit {
serverSocket?.close()
}
func start() -> Bool {
guard !started else {
return true
}
guard let socket = serverSocket else {
return false
}
do {
try socket.bindAndListen()
} catch {
Log.error("Error binding server: \(error)")
return false
}
started = true
workQueue.async { [weak self] in
guard let strongSelf = self else {
return
}
repeat {
let client = socket.acceptClientConnection()
defer {
client.close()
}
if let parsedRequest = ServerRequest(rawRequest: client.readRequest()) {
let response = strongSelf.handler.respond(toRequest: parsedRequest)
client.respond(withHeaders: response.headers.joined(separator: "\n"), andContent: response.body)
}
} while socket.isRunning
}
return true
}
}
As you can see, despite our attempts to keep everything as simple as possible, we made the request handler pluggable; there are some basic code quality standards that can't be neglected under any circumstances. Our handler is simple.
protocol ServerResponseHandler {
func respond(toRequest: ServerRequest?) -> ServerResponse
}
class LocalPrebidCacheHandler: ServerResponseHandler {
private let respCache: ResponseCache
init(responseCache: ResponseCache) {
respCache = responseCache
}
func respond(toRequest: ServerRequest?) -> ServerResponse {
guard let request = toRequest, request.method.lowercased() == "get" else {
return SimpleServerResponses.errorParsing
}
guard request.parameters.starts(with: "/") else {
return SimpleServerResponses.errorParsing
}
let cacheKey = String(request.parameters.dropFirst())
if let cachedResponse = respCache.getResponse(forId: cacheKey) {
return SimpleServerResponses.ok(content: cachedResponse)
} else {
return SimpleServerResponses.errorNotFound
}
}
}
Now we needed only one more class to glue these parts together and act as a facade for the whole caching mechanism.
struct LocalCacheServer {
private let server: SimpleServer
private let cache: ResponseCache
private let handler: LocalPrebidCacheHandler
init(port: UInt16) {
cache = ResponseCache()
handler = LocalPrebidCacheHandler(responseCache: cache)
server = SimpleServer(port: port, handler: handler)
}
func start() -> Bool {
guard !server.started else {
return true
}
return server.start()
}
func cache(response: String, withId respId: String) {
cache.store(response: response, withId: respId)
}
}
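Inside the SDK, the facade is then used roughly as follows when a bid arrives from the Prebid Server; the function and parameter names here are illustrative rather than taken from the patch.
let localCache = LocalCacheServer(port: 16257)

func onBidReceived(creativeHtml: String, cacheId: String) {
    // Start lazily; start() is a no-op if the server is already running.
    guard localCache.start() else {
        return // fall back to the remote Prebid Cache Server
    }
    // Store the winning creative under the same ID the ad server creative
    // will request from localhost:16257/<cacheId>.
    localCache.cache(response: creativeHtml, withId: cacheId)
}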
I will skip the ResponseCache class, as it's just a thread-safe wrapper around a regular dictionary.
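For completeness, a minimal version might look like the sketch below; this is our illustration of such a wrapper, not necessarily the exact class from the patch.
import Foundation

/// A minimal thread-safe in-memory store for cached bid responses.
class ResponseCache {
    private var storage: [String: String] = [:]
    private let queue = DispatchQueue(label: "prebid.local.cache", attributes: .concurrent)

    func store(response: String, withId respId: String) {
        // Barrier write so readers never see a partially updated dictionary.
        queue.async(flags: .barrier) {
            self.storage[respId] = response
        }
    }

    func getResponse(forId respId: String) -> String? {
        return queue.sync {
            storage[respId]
        }
    }
}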
Everything is in place, and we're ready to see how much time we saved. Our assumption was simple: we're getting rid of one call to the Prebid Cache Server and one call to the CDN that loads the Universal Creative, so we save exactly the time those requests would take. It was easy to test using the Postman debug proxy. It turns out we save from 70 to 250 ms on wired internet and from 120 to 400 ms on 4G. We ran a series of tests in different countries with different ISPs; results varied from country to country, so the effect of this solution generally depends on the quality of the internet connection. A localhost cache request took about 5 ms, so for the sake of simplicity we can consider it instant. As you can see, the speed gain is significant, especially in cases where ad loading time is critical (for example, native ads shown in a UITableView cell that the user is scrolling).
Our next step was to create a new version of the creative for Google Ad Manager: it should try to use the cached value and, if that fails, fall back to the regular Google Ad Manager flow. That was additional insurance: even if our local cache fails, the ad will be shown anyway. The spice must flow!
Here is the code of this creative; it's really straightforward. As you can see, we decided to use port 16257 to minimize the chance of port conflicts.
<script>
var ucTagData = {};
ucTagData.adServerDomain = "";
ucTagData.pubUrl = "%%PATTERN:url%%";
ucTagData.targetingMap = %%PATTERN:TARGETINGMAP%%;
ucTagData.hbPb = "%%PATTERN:hb_pb%%";
fetch('https://localhost:16257/%%PATTERN:hb_cache_id%%')
.then(response => response.text())
.then(cache => {
var iframe = document.createElement('iframe');
document.body.append(iframe);
iframe.style = "border-style: none; position: absolute; width:100%; height:100%;";
iframe.contentDocument.write(cache);
})
.catch(() => {
var script = document.createElement('script');
script.onload = () => {
try {
ucTag.renderAd(document, ucTagData);
} catch (e) {
console.log(e);
}
};
script.src = 'https://cdn.jsdelivr.net/npm/prebid-universal-creative@latest/dist/creative.js';
document.body.append(script);
});
</script>
We felt satisfied and were preparing the solution to be pushed to the public repo, but suddenly realized that it doesn't work with the Google Ads SDK. After a few days of debugging (a hard process that is far from straightforward), we found out that Google denies any non-TLS connections. Even though the app's transport security policy explicitly allows non-HTTPS requests, the Google Ads SDK overrides that at the WebView level and prohibits them.
So we had no other option but to implement a TLS server.
There are plenty of TLS server implementations: some are part of server-side Swift frameworks, others are based on the relatively new SwiftNIO, but our goal was to keep the solution's disk footprint as small as possible, so we decided to implement TLS support ourselves.
In iOS 12 Apple introduced a framework called Network that seemed tailored to our needs. Unfortunately, it's not well documented, and almost all the tutorials and manuals we could find focus on the client side. Since we needed to move fast, we took a less modern approach and leveraged a time-proven solution: the Security framework. Its documentation is better and there are a few working examples, so we were able to implement TLS pretty quickly.
First, we needed a wrapper around TLS session functions.
import Foundation
private func throwIfError(_ status: OSStatus) throws {
guard status == noErr else {
throw SimpleSocketError.sslError(from: status)
}
}
open class TlsSession {
/// Imports .p12 certificate file
///
/// See [SecPKCS12Import](https://developer.apple.com/documentation/security/1396915-secpkcs12import).
///
/// - Parameter _data: .p12 certificate file content
/// - Parameter password: password used when importing certificate
public static func loadP12Certificate(fromData data: Data, withPassword password: String) throws -> CFArray {
var items: CFArray?
let options = [kSecImportExportPassphrase: password]
try throwIfError(SecPKCS12Import(data as NSData, options as NSDictionary, &items))
let castedItems = (items! as [AnyObject])[0]
let secIdentity = castedItems[kSecImportItemIdentity] as! SecIdentity
let certChain = castedItems[kSecImportItemCertChain] as! [SecCertificate]
let certs = [secIdentity] + certChain.dropFirst().map { $0 as Any }
return certs as CFArray
}
private let context: SSLContext
private var connPtr = UnsafeMutablePointer<Int32>.allocate(capacity: 1)
init(connectionRef: Int32, certificate: CFArray) throws {
guard let newContext = SSLCreateContext(nil, .serverSide, .streamType) else {
throw SimpleSocketError.tlsSessionFailed("Can't create SSL context")
}
context = newContext
connPtr.pointee = connectionRef
try throwIfError(SSLSetIOFuncs(context, sslRead, sslWrite))
try throwIfError(SSLSetConnection(context, connPtr))
try throwIfError(SSLSetCertificate(context, certificate))
}
func close() {
SSLClose(context)
connPtr.deallocate()
}
func handshake() throws {
var status: OSStatus = -1
repeat {
status = SSLHandshake(context)
} while status == errSSLWouldBlock
try throwIfError(status)
}
/// Write up to `length` bytes to TLS session from a buffer `pointer` points to.
///
/// - Returns: The number of bytes written
/// - Throws: SocketError.tlsSessionFailed if unable to write to the session
func writeBuffer(_ pointer: UnsafeRawPointer, length: Int) throws -> Int {
var written = 0
try throwIfError(SSLWrite(context, pointer, length, &written))
return written
}
/// Read up to `length` bytes from TLS session into an existing buffer
///
/// - Parameter into: The buffer to read into (must be at least length bytes in size)
/// - Returns: The number of bytes read
/// - Throws: SocketError.tlsSessionFailed if unable to read from the session
func read(into buffer: UnsafeMutablePointer<UInt8>, length: Int) throws -> Int {
var received = 0
try throwIfError(SSLRead(context, buffer, length, &received))
return received
}
}
private func sslWrite(connection: SSLConnectionRef, data: UnsafeRawPointer,
dataLength: UnsafeMutablePointer<Int>) -> OSStatus {
let fPtr = connection.assumingMemoryBound(to: Int32.self).pointee
let bytesToWrite = dataLength.pointee
let written = Darwin.write(fPtr, data, bytesToWrite)
dataLength.pointee = written
if written > 0 {
return written < bytesToWrite ? errSSLWouldBlock : noErr
}
if written == 0 {
return errSSLClosedGraceful
}
dataLength.pointee = 0
return errno == EAGAIN ? errSSLWouldBlock : errSecIO
}
private func sslRead(connection: SSLConnectionRef, data: UnsafeMutableRawPointer,
dataLength: UnsafeMutablePointer<Int>) -> OSStatus {
let fPtr = connection.assumingMemoryBound(to: Int32.self).pointee
let bytesToRead = dataLength.pointee
let read = recv(fPtr, data, bytesToRead, 0)
dataLength.pointee = read
if read > 0 {
return read < bytesToRead ? errSSLWouldBlock : noErr
}
if read == 0 {
return errSSLClosedGraceful
}
dataLength.pointee = 0
switch errno {
case ENOENT:
return errSSLClosedGraceful
case EAGAIN:
return errSSLWouldBlock
case ECONNRESET:
return errSSLClosedAbort
default:
return errSecIO
}
}
We refactored our custom error enum a little to add support for TLS errors. As you can see, it partially reuses the existing structure.
enum SimpleSocketError: Error {
case cantCreate(code: Int32)
case cantBind(code: Int32)
case cantListen(code: Int32)
case tlsSessionFailed(_ message: String)
private func description(prefix: String, forCode code: Int32) -> String {
// https://forums.developer.apple.com/thread/113919
let reason = String(cString: strerror(code))
return "\(prefix): \(code). \(reason)"
}
static func sslError(from status: OSStatus) -> SimpleSocketError {
if #available(iOS 11.3, *) {
guard let msg = SecCopyErrorMessageString(status, nil) else {
return SimpleSocketError.tlsSessionFailed("<\(status): message is not provided>")
}
return SimpleSocketError.tlsSessionFailed(msg as NSString as String)
} else {
return SimpleSocketError.tlsSessionFailed("Some TLS error")
}
}
var localizedDescription: String {
switch self {
case let .cantBind(code):
return description(prefix: "Can't bind socket", forCode: code)
case let .cantCreate(code):
return description(prefix: "Can't create server socket", forCode: code)
case let .cantListen(code):
return description(prefix: "Can't listen on socket", forCode: code)
case let .tlsSessionFailed(message):
return "TLS Error: \(message)"
}
}
}
Then all we needed was to make a few minor tweaks to the ClientConnection class and add TLS support to SimpleServer. The changes were really minor, but I will post the full updated versions of both classes.
class ClientConnection {
private let clientSocket: Int32
private let bufferMax = 2048
private var readBuffer: [UInt8]
private var tls: TlsSession?
init(sock: Int32) {
var length = socklen_t(MemoryLayout<sockaddr_storage>.size)
let addr = UnsafeMutablePointer<sockaddr_storage>.allocate(capacity: 1)
let addrSockAddr = UnsafeMutablePointer<sockaddr>(OpaquePointer(addr))
readBuffer = Array(repeating: UInt8(0), count: bufferMax)
clientSocket = accept(sock, addrSockAddr, &length)
}
private func send(_ socket: Int32, _ output: String) throws {
_ = try output.withCString { (bytes) in
let length = Int(strlen(bytes))
if let ssl = tls {
_ = try ssl.writeBuffer(bytes, length: length)
return
}
Darwin.send(socket, bytes, length, 0)
}
}
func startTlsSession(certificate: CFArray) {
do {
tls = try TlsSession(connectionRef: clientSocket, certificate: certificate)
try tls?.handshake()
} catch {
Log.error("TLS session setup failed: \(error)")
}
}
func readRequest() -> String? {
let readBufPtr = UnsafeMutableBufferPointer<UInt8>.allocate(capacity: bufferMax)
defer {
readBufPtr.deallocate()
}
guard let session = tls else {
Log.error("Can't get session")
return nil
}
guard let count = try? session.read(into: readBufPtr.baseAddress!, length: bufferMax) else {
return nil
}
let result = [UInt8](readBufPtr[0..<count])
return String(bytes: result, encoding: .utf8)
}
func writeResponse(_ string: String) {
guard let session = tls else {
Log.error("Can't get session")
return
}
let data = ArraySlice(string.utf8)
let length = data.count
do {
try data.withUnsafeBufferPointer { buffer in
guard let pointer = buffer.baseAddress else {
return
}
var sent = 0
while sent < length {
sent += try session.writeBuffer(pointer + sent, length: Int(length - sent))
}
}
} catch {
Log.error("Error writing response: \(error)")
}
}
func respond(withHeaders: String, andContent: String = "") {
let response = withHeaders + "\r\n\r\n" + andContent
do {
try send(clientSocket, response)
} catch {
Log.error("Error sending response \(error)")
}
close()
}
func close() {
tls?.close()
Darwin.close(clientSocket)
}
}
The difference is about 20 lines of code, and most of them aren't caused by TLS itself but by a slight rewrite of the code that works with unsafe buffers.
And here is the SimpleServer class; its changes are even smaller.
class SimpleServer {
private let serverSocket: ServerSocket?
private let workQueue = DispatchQueue(label: "simple.http.server.worker", qos: .userInteractive, attributes: .concurrent)
private(set) var started = false
private let respCache = ResponseCache()
private let handler: ServerResponseHandler
private let cert: CFArray?
init(port: UInt16, handler: ServerResponseHandler, certificates: CFArray?) {
self.handler = handler
cert = certificates
do {
serverSocket = try ServerSocket(port: port)
} catch {
Log.error("Error creating socket: \(error)")
serverSocket = nil
}
}
deinit {
serverSocket?.close()
}
func start() -> Bool {
guard let certificates = cert else {
return false
}
guard !started else {
return true
}
guard let socket = serverSocket else {
return false
}
do {
try socket.bindAndListen()
} catch {
Log.error("Error binding server: \(error)")
return false
}
started = true
Log.info("Local cache server started")
workQueue.async { [weak self] in
guard let strongSelf = self else {
return
}
repeat {
let client = socket.acceptClientConnection()
defer {
client.close()
}
client.startTlsSession(certificate: certificates)
if let parsedRequest = ServerRequest(rawRequest: client.readRequest()) {
let response = strongSelf.handler.respond(toRequest: parsedRequest)
let respData = response.headers.joined(separator: "\n") + "\r\n\r\n" + response.body
Log.info("Serving local response")
client.writeResponse(respData)
}
} while socket.isRunning
}
return true
}
}
So these were all the changes needed. Of course, there was one more problem that many developers are aware of: we needed an SSL certificate for localhost. It's pretty easy to create a self-signed certificate, but that requires modifying the trusted root certificates on the device, which is obviously not acceptable. So we used a second, fairly popular approach: we bought a certificate for our company's subdomain localhost.postindustria.com and pointed its A record to 127.0.0.1. After gluing everything together, the solution worked, and to be honest, at first I could not believe it.
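Wiring the purchased certificate in looks roughly like the sketch below; the bundled file name and the password are placeholders, and in the real patch the .p12 would ship inside the SDK's own resources rather than the app bundle. Note that for TLS validation to succeed, the creative has to request the cache through the host name the certificate was issued for.
func makeLocalCacheServer() -> SimpleServer? {
    // "localhost_postindustria" and the password are placeholders for the
    // bundled .p12 issued for localhost.postindustria.com.
    guard let p12Url = Bundle.main.url(forResource: "localhost_postindustria",
                                       withExtension: "p12"),
          let p12Data = try? Data(contentsOf: p12Url),
          let certificates = try? TlsSession.loadP12Certificate(fromData: p12Data,
                                                                withPassword: "p12-password")
    else {
        return nil
    }
    let handler = LocalPrebidCacheHandler(responseCache: ResponseCache())
    return SimpleServer(port: 16257, handler: handler, certificates: certificates)
}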
After an additional series of tests, we found that TLS requests to the local server take about 35-40 ms, so there is a small efficiency cut compared to the almost instant plain HTTP requests (about 5 ms on average). Still, we save at least 60-70 ms on the fastest connections, and with the poor connectivity that is so common in the world of mobile data, the benefits are even more noticeable.
It may seem that the patch to the Prebid SDK described above saves only a tiny amount of loading time. Applied at scale, however, to every ad call on many devices, it brings bandwidth savings, reduced latency, and more efficient use of device resources (mobile radios are brought up a little less often). Before declaring the implementation final, we are thoroughly testing this solution as part of our own products and watching the metrics; perhaps there are more optimizations ahead!