By reading this article you will pick up the following:
1. The Treble mechanism in Android O
2. The Camera HAL3 framework updates
3. The core concept: Request
In Android O, the system starts a CameraProvider service at boot. It has been split out of the cameraserver process into a standalone process, android.hardware.camera.provider@2.4-service, which controls the camera HAL; cameraserver communicates with the camera provider over HIDL. HIDL comes from the Treble mechanism introduced in Android O. Its main purpose is to isolate the services from the HAL so that the HAL side can be upgraded independently. Much like the Binder communication between an APP and the Framework, it decouples the layers by introducing an inter-process communication mechanism (turning a local call into a remote call). As shown below:
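The decoupling idea can be sketched with a tiny stand-alone example (plain Java; the names `ICameraProvider`, `LocalProvider`, and `RemoteProviderProxy` are invented here for illustration, not the real HIDL-generated interfaces):

```java
// Minimal sketch of interface-based decoupling, in the spirit of HIDL.
// All names are hypothetical, not the real android.hardware interfaces.
interface ICameraProvider {
    String getCameraIdList();
}

// In-process implementation: a plain local call.
class LocalProvider implements ICameraProvider {
    public String getCameraIdList() { return "0,1"; }
}

// Stand-in for a binder proxy: the caller's code is identical, but the call
// would actually cross a process boundary before reaching the implementation.
class RemoteProviderProxy implements ICameraProvider {
    private final ICameraProvider remoteImpl;
    RemoteProviderProxy(ICameraProvider impl) { remoteImpl = impl; }
    public String getCameraIdList() {
        // marshalling -> IPC -> unmarshalling would happen here
        return remoteImpl.getCameraIdList();
    }
}

public class TrebleSketch {
    static String query(ICameraProvider provider) {
        // The service code depends only on the interface, so the provider
        // process (the HAL side) can be upgraded independently.
        return provider.getCameraIdList();
    }
    public static void main(String[] args) {
        System.out.println(query(new LocalProvider()));
        System.out.println(query(new RemoteProviderProxy(new LocalProvider())));
    }
}
```

Because the caller only sees the interface, swapping the local implementation for a remote proxy changes nothing on the caller's side; that is exactly what turning a local call into a remote call buys.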
The start-up and initialization call flow of the cameraserver and provider processes is shown below:
Application framework:
Provides the Camera API2 that apps use to access the hardware; it talks to the camera service over Binder.
AIDL:
A Binder-based interface that lets the app framework code call into the native framework code. Its definitions live under frameworks/av/camera/aidl/android/hardware. Among them:
(1) ICameraService
The interface to the camera service, used to request connections, add listeners, and so on.
(2) ICameraDeviceUser
The interface to a specific opened camera device; the application framework accesses the concrete device through it.
(3) ICameraServiceListener and ICameraDeviceCallbacks
The callbacks from CameraService and CameraDevice, respectively, back to the application framework.
Native framework:
frameworks/av/. Provides the implementations of the ICameraService, ICameraDeviceUser, ICameraDeviceCallbacks, and ICameraServiceListener AIDL interfaces, as well as the camera server's main function.
Binder IPC interface:
The inter-process communication interfaces: APP to CameraService, and CameraService to the HAL. Both AIDL and HIDL are built on Binder.
Camera Service:
frameworks/av/services/camera/. The service that talks to both the APP and the HAL, acting as the bridge between them.
HAL:
Google's HAL defines the standard interfaces that the Camera Service can call; vendors must implement these interfaces.
As shown below (red dashed lines are the upstream path, black dashed lines the downstream path):
Connecting from the Application to the CameraService involves three layers of the Android architecture: the App layer, the Framework layer, and the Runtime layer. The App layer directly calls the methods wrapped by the Framework layer, while the Framework layer must use Binder to remotely call the CameraService functions in the Runtime.
The main function call flow of this part is shown in the figure below:
In the app, the API classes involved in opening the camera are as follows:
CameraCharacteristics:
Describes the camera's capabilities; obtained via CameraManager's getCameraCharacteristics(@NonNull String cameraId) method.
CameraDevice:
Represents a system camera, similar to the old Camera class.
CameraCaptureSession:
The session class. To take a picture, preview, etc., you first create an instance of this class and then drive everything through its methods (for example capture() to take a picture).
CaptureRequest:
Describes a single capture request. Taking a picture, previewing, and similar operations all require a CaptureRequest parameter, and the specific controls are set through CaptureRequest's fields.
CaptureResult:
Describes the result once a capture has completed.
For example, the Java code to open the camera:
mCameraManager.openCamera(currentCameraId, stateCallback, backgroundHandler);
The Camera2 capture flow is as follows:
/frameworks/base/core/java/android/hardware/camera2/CameraManager.java
The initial entry point is CameraManager's openCamera method, but as the code shows, it simply calls openCameraForUid.
@RequiresPermission(android.Manifest.permission.CAMERA)
public void openCamera(@NonNull String cameraId,
        @NonNull final CameraDevice.StateCallback callback, @Nullable Handler handler)
        throws CameraAccessException {
    openCameraForUid(cameraId, callback, handler, USE_CALLING_UID);
}
The code below elides some parameter-checking operations; in the end it mainly calls openCameraDeviceUserAsync.
public void openCameraForUid(@NonNull String cameraId,
        @NonNull final CameraDevice.StateCallback callback, @Nullable Handler handler,
        int clientUid)
        throws CameraAccessException {
    /* ... */
    openCameraDeviceUserAsync(cameraId, callback, handler, clientUid);
}
The annotated code below walks through it:
private CameraDevice openCameraDeviceUserAsync(String cameraId,
        CameraDevice.StateCallback callback, Handler handler, final int uid)
        throws CameraAccessException {
    CameraCharacteristics characteristics = getCameraCharacteristics(cameraId);
    CameraDevice device = null;
    synchronized (mLock) {
        ICameraDeviceUser cameraUser = null;
        // Instantiate a CameraDeviceImpl, passing the CameraDevice.StateCallback
        // and the Handler into the constructor.
        android.hardware.camera2.impl.CameraDeviceImpl deviceImpl =
                new android.hardware.camera2.impl.CameraDeviceImpl(
                    cameraId,
                    callback,
                    handler,
                    characteristics,
                    mContext.getApplicationInfo().targetSdkVersion);
        // Get the CameraDeviceCallback instance: the interface the remote end
        // uses to reach back into CameraDeviceImpl.
        ICameraDeviceCallbacks callbacks = deviceImpl.getCallbacks();
        try {
            if (supportsCamera2ApiLocked(cameraId)) {
                // HAL3 takes this path: obtain the local ICameraService interface
                // from CameraManagerGlobal, then remotely invoke (via Binder) its
                // connectDevice method to connect to the camera device.
                // Note that the returned cameraUser actually points to the local
                // interface of the remote CameraDeviceClient.
                // Use cameraservice's cameradeviceclient implementation for HAL3.2+ devices
                ICameraService cameraService = CameraManagerGlobal.get().getCameraService();
                if (cameraService == null) {
                    throw new ServiceSpecificException(
                        ICameraService.ERROR_DISCONNECTED,
                        "Camera service is currently unavailable");
                }
                cameraUser = cameraService.connectDevice(callbacks, cameraId,
                        mContext.getOpPackageName(), uid);
            } else { // Use legacy camera implementation for HAL1 devices
                int id;
                try {
                    id = Integer.parseInt(cameraId);
                } catch (NumberFormatException e) {
                    throw new IllegalArgumentException("Expected cameraId to be numeric, but it was: "
                            + cameraId);
                }
                Log.i(TAG, "Using legacy camera HAL.");
                cameraUser = CameraDeviceUserShim.connectBinderShim(callbacks, id);
            }
        } catch (ServiceSpecificException e) {
            /* ... */
        }
        // TODO: factor out callback to be non-nested, then move setter to constructor
        // For now, calling setRemoteDevice will fire initial
        // onOpened/onUnconfigured callbacks.
        // This function call may post onDisconnected and throw CAMERA_DISCONNECTED if
        // cameraUser dies during setup.
        // Hand the CameraDeviceClient over to CameraDeviceImpl for management.
        deviceImpl.setRemoteDevice(cameraUser);
        device = deviceImpl;
    }
    return device;
}
Before continuing down the open flow, take a quick look at the setRemoteDevice method of CameraDeviceImpl called above; it mainly stores the remote device we obtained:
/**
 * Set remote device, which triggers initial onOpened/onUnconfigured callbacks
 *
 * <p>This function may post onDisconnected and throw CAMERA_DISCONNECTED if remoteDevice dies
 * during setup.</p>
 */
public void setRemoteDevice(ICameraDeviceUser remoteDevice) throws CameraAccessException {
    synchronized (mInterfaceLock) {
        // TODO: Move from decorator to direct binder-mediated exceptions
        // If setRemoteFailure already called, do nothing
        if (mInError) return;
        // Wrap the remote device instance in an ICameraDeviceUserWrapper.
        mRemoteDevice = new ICameraDeviceUserWrapper(remoteDevice);
        // Basic Binder setup.
        // For legacy camera device, remoteDevice is in the same process, and
        // asBinder returns NULL.
        IBinder remoteDeviceBinder = remoteDevice.asBinder();
        if (remoteDeviceBinder != null) {
            try {
                // Register a death recipient so we are notified if this binder
                // goes away.
                remoteDeviceBinder.linkToDeath(this, /*flag*/ 0);
            } catch (RemoteException e) {
                CameraDeviceImpl.this.mDeviceHandler.post(mCallOnDisconnected);
                throw new CameraAccessException(CameraAccessException.CAMERA_DISCONNECTED,
                        "The camera device has encountered a serious error");
            }
        }
        // Trigger the onOpened and onUnconfigured callbacks here; each is
        // posted through mDeviceHandler to run on another thread.
        mDeviceHandler.post(mCallOnOpened);
        mDeviceHandler.post(mCallOnUnconfigured);
    }
}
Via the Binder mechanism, we remotely invoke the connectDevice method (a "function" in C++ terms, though "method" reads more naturally), which is implemented in the CameraService class.
2.2.4 CameraService — frameworks/av/services/camera/libcameraservice/CameraService.cpp
Status CameraService::connectDevice(
        const sp<hardware::camera2::ICameraDeviceCallbacks>& cameraCb,
        const String16& cameraId,
        const String16& clientPackageName,
        int clientUid,
        /*out*/
        sp<hardware::camera2::ICameraDeviceUser>* device) {
    ATRACE_CALL();
    Status ret = Status::ok();
    String8 id = String8(cameraId);
    // connectHelper is where the connection logic is really implemented (HAL1
    // eventually calls into it too). Note the template arguments:
    // ICameraDeviceCallbacks and CameraDeviceClient.
    sp<CameraDeviceClient> client = nullptr;
    ret = connectHelper<hardware::camera2::ICameraDeviceCallbacks,CameraDeviceClient>(cameraCb, id,
            CAMERA_HAL_API_VERSION_UNSPECIFIED, clientPackageName,
            clientUid, USE_CALLING_PID, API_2, /*legacyMode*/ false, /*shimUpdateOnly*/ false,
            /*out*/client);
    if(!ret.isOk()) {
        logRejected(id, getCallingPid(), String8(clientPackageName),
                ret.toString8());
        return ret;
    }
    // client points to a CameraDeviceClient; that instance is the final result.
    *device = client;
    return ret;
}
connectHelper is fairly long; the parts we don't yet need to care about are elided:
template<class CALLBACK, class CLIENT>
Status CameraService::connectHelper(const sp<CALLBACK>& cameraCb, const String8& cameraId,
        int halVersion, const String16& clientPackageName, int clientUid, int clientPid,
        apiLevel effectiveApiLevel, bool legacyMode, bool shimUpdateOnly,
        /*out*/sp<CLIENT>& device) {
    binder::Status ret = binder::Status::ok();
    String8 clientName8(clientPackageName);
    /* ... */
    // Call makeClient to create the CameraDeviceClient instance.
    sp<BasicClient> tmp = nullptr;
    if(!(ret = makeClient(this, cameraCb, clientPackageName, cameraId, facing, clientPid,
            clientUid, getpid(), legacyMode, halVersion, deviceVersion, effectiveApiLevel,
            /*out*/&tmp)).isOk()) {
        return ret;
    }
    // Initialize the CLIENT instance. The template type CLIENT here is
    // CameraDeviceClient, and the mCameraProviderManager argument ties in with
    // the HAL service.
    client = static_cast<CLIENT*>(tmp.get());
    LOG_ALWAYS_FATAL_IF(client.get() == nullptr, "%s: CameraService in invalid state",
            __FUNCTION__);
    err = client->initialize(mCameraProviderManager);
    /* ... */
    // Important: release the mutex here so the client can call back into the service from its
    // destructor (can be at the end of the call)
    device = client;
    return ret;
}
makeClient chooses which concrete client to instantiate based on the API version and the HAL version. The client then travels back along the path we just traced into the CameraDeviceImpl instance, where it is stored as mRemoteDevice.
Status CameraService::makeClient(const sp<CameraService>& cameraService,
        const sp<IInterface>& cameraCb, const String16& packageName, const String8& cameraId,
        int facing, int clientPid, uid_t clientUid, int servicePid, bool legacyMode,
        int halVersion, int deviceVersion, apiLevel effectiveApiLevel,
        /*out*/sp<BasicClient>* client) {
    if (halVersion < 0 || halVersion == deviceVersion) {
        // Default path: HAL version is unspecified by caller, create CameraClient
        // based on device version reported by the HAL.
        switch(deviceVersion) {
          case CAMERA_DEVICE_API_VERSION_1_0:
            /* ... */
          case CAMERA_DEVICE_API_VERSION_3_0:
          case CAMERA_DEVICE_API_VERSION_3_1:
          case CAMERA_DEVICE_API_VERSION_3_2:
          case CAMERA_DEVICE_API_VERSION_3_3:
          case CAMERA_DEVICE_API_VERSION_3_4:
            if (effectiveApiLevel == API_1) { // Camera1 API route
                sp<ICameraClient> tmp = static_cast<ICameraClient*>(cameraCb.get());
                *client = new Camera2Client(cameraService, tmp, packageName, cameraIdToInt(cameraId),
                        facing, clientPid, clientUid, servicePid, legacyMode);
            } else {
                // Camera2 API route: instantiate CameraDeviceClient as the client.
                // Note that the constructor receives ICameraDeviceCallbacks, the
                // remote callback connecting back to CameraDeviceImpl.
                sp<hardware::camera2::ICameraDeviceCallbacks> tmp =
                        static_cast<hardware::camera2::ICameraDeviceCallbacks*>(cameraCb.get());
                *client = new CameraDeviceClient(cameraService, tmp, packageName, cameraId,
                        facing, clientPid, clientUid, servicePid);
            }
            break;
          default:
            // Should not be reachable
            ALOGE("Unknown camera device HAL version: %d", deviceVersion);
            return STATUS_ERROR_FMT(ERROR_INVALID_OPERATION,
                    "Camera device \"%s\" has unknown HAL version %d",
                    cameraId.string(), deviceVersion);
        }
    } else {
        /* ... */
    }
    return Status::ok();
}
At this point, the App-to-CameraService leg of the camera open flow is essentially complete.
Summary diagram:
2.3 From CameraService to the HAL Service
Because Android O introduced the Treble mechanism, the CameraServer side is anchored by CameraService. It looks for the existing provider services and adds them to its internal CameraProviderManager for management; all related operations are performed through remote calls.
On the provider service side, the main actor is CameraProvider. During its initialization it already connects to the Camera HAL implementation layer in libhardware and manages it through CameraModule.
Once both processes have started, the "carrier" of the link is in place (note that QCamera3HWI has not been created yet), as the diagram below roughly shows:
When the camera is opened, the complete link at this layer is created; the main call flow is shown in the figure below:
Previously we saw that CameraService::makeClient instantiates a CameraDeviceClient. Let's now pick up the camera open flow from its constructor.
This part mostly takes place in the Runtime layer; we will look at it from two sides, CameraService and the HAL Service.
frameworks\av\services\camera\libcameraservice\api2\CameraDeviceClient.cpp
CameraDeviceClient::CameraDeviceClient(const sp<CameraService>& cameraService,
        const sp<hardware::camera2::ICameraDeviceCallbacks>& remoteCallback,
        const String16& clientPackageName, const String8& cameraId, int cameraFacing,
        int clientPid, uid_t clientUid, int servicePid) :
    // Delegate to the parent class, Camera2ClientBase.
    Camera2ClientBase(cameraService, remoteCallback, clientPackageName,
            cameraId, cameraFacing, clientPid, clientUid, servicePid),
    mInputStream(),
    mStreamingRequestId(REQUEST_ID_NONE),
    mRequestIdCounter(0),
    mPrivilegedClient(false) {
    char value[PROPERTY_VALUE_MAX];
    property_get("persist.camera.privapp.list", value, "");
    String16 packagelist(value);
    if (packagelist.contains(clientPackageName.string())) {
        mPrivilegedClient = true;
    }
    ATRACE_CALL();
    ALOGI("CameraDeviceClient %s: Opened", cameraId.string());
}
After creating the CameraDeviceClient, CameraService calls its initialization function:
// The externally visible initialization entry point, initialize.
status_t CameraDeviceClient::initialize(sp<CameraProviderManager> manager) {
    return initializeImpl(manager);
}

// The concrete implementation; the template type TProviderPtr is
// CameraProviderManager here.
template<typename TProviderPtr>
status_t CameraDeviceClient::initializeImpl(TProviderPtr providerPtr) {
    ATRACE_CALL();
    status_t res;
    // First initialize the parent class; note that the CameraProviderManager is
    // passed in here.
    res = Camera2ClientBase::initialize(providerPtr);
    if (res != OK) {
        return res;
    }
    // Creation and initial configuration of the FrameProcessor.
    String8 threadName;
    mFrameProcessor = new FrameProcessorBase(mDevice);
    threadName = String8::format("CDU-%s-FrameProc", mCameraIdStr.string());
    mFrameProcessor->run(threadName.string());
    mFrameProcessor->registerListener(FRAME_PROCESSOR_LISTENER_MIN_ID,
            FRAME_PROCESSOR_LISTENER_MAX_ID,
            /*listener*/this,
            /*sendPartials*/true);
    return OK;
}
frameworks\av\services\camera\libcameraservice\common\Camera2ClientBase.cpp
// The template type TClientBase is specified as CameraDeviceClientBase when
// CameraDeviceClient inherits Camera2ClientBase.
template <typename TClientBase>
Camera2ClientBase<TClientBase>::Camera2ClientBase(
        // Constructor parameters and initializer list. Note that TCamCallbacks
        // is specified as ICameraDeviceCallbacks in CameraDeviceClientBase.
        const sp<CameraService>& cameraService, const sp<TCamCallbacks>& remoteCallback,
        const String16& clientPackageName, const String8& cameraId, int cameraFacing,
        int clientPid, uid_t clientUid, int servicePid):
    TClientBase(cameraService, remoteCallback, clientPackageName,
            cameraId, cameraFacing, clientPid, clientUid, servicePid),
    mSharedCameraCallbacks(remoteCallback),
    mDeviceVersion(cameraService->getDeviceVersion(TClientBase::mCameraIdStr)),
    mDeviceActive(false)
{
    ALOGI("Camera %s: Opened. Client: %s (PID %d, UID %d)", cameraId.string(),
            String8(clientPackageName).string(), clientPid, clientUid);
    mInitialClientPid = clientPid;
    // Create a Camera3Device.
    mDevice = new Camera3Device(cameraId);
    LOG_ALWAYS_FATAL_IF(mDevice == 0, "Device should never be NULL here.");
}
Now back to the initialization function:
// The initialization entry point; the real implementation is in initializeImpl.
template <typename TClientBase>
status_t Camera2ClientBase<TClientBase>::initialize(sp<CameraProviderManager> manager) {
    return initializeImpl(manager);
}

// TClientBase corresponds to CameraDeviceClientBase, and TProviderPtr to
// CameraProviderManager.
template <typename TClientBase>
template <typename TProviderPtr>
status_t Camera2ClientBase<TClientBase>::initializeImpl(TProviderPtr providerPtr) {
    ATRACE_CALL();
    ALOGV("%s: Initializing client for camera %s", __FUNCTION__,
            TClientBase::mCameraIdStr.string());
    status_t res;
    // Verify ops permissions: call CameraDeviceClientBase's startCameraOps.
    res = TClientBase::startCameraOps();
    if (res != OK) {
        return res;
    }
    if (mDevice == NULL) {
        ALOGE("%s: Camera %s: No device connected",
                __FUNCTION__, TClientBase::mCameraIdStr.string());
        return NO_INIT;
    }
    // Initialize the Camera3Device instance; note that the
    // CameraProviderManager is passed in here.
    res = mDevice->initialize(providerPtr);
    if (res != OK) {
        ALOGE("%s: Camera %s: unable to initialize device: %s (%d)",
                __FUNCTION__, TClientBase::mCameraIdStr.string(), strerror(-res), res);
        return res;
    }
    // Set the notify callback on the Camera3Device instance.
    wp<CameraDeviceBase::NotificationListener> weakThis(this);
    res = mDevice->setNotifyCallback(weakThis);
    return OK;
}
frameworks\av\services\camera\libcameraservice\device3\Camera3Device.cpp
Camera3Device::Camera3Device(const String8 &id):
mId(id),
mOperatingMode(NO_MODE),
mIsConstrainedHighSpeedConfiguration(false),
mStatus(STATUS_UNINITIALIZED),
mStatusWaiters(0),
mUsePartialResult(false),
mNumPartialResults(1),
mTimestampOffset(0),
mNextResultFrameNumber(0),
mNextReprocessResultFrameNumber(0),
mNextShutterFrameNumber(0),
mNextReprocessShutterFrameNumber(0),
mListener(NULL),
mVendorTagId(CAMERA_METADATA_INVALID_VENDOR_ID)
{
ATRACE_CALL();
    // Two callback interfaces are set up in this constructor:
camera3_callback_ops::notify = &sNotify;
camera3_callback_ops::process_capture_result = &sProcessCaptureResult;
ALOGV("%s: Created device for camera %s", __FUNCTION__, mId.string());
}
Its initialization function is fairly long; the operations related to the RequestMetadataQueue are omitted here.
status_t Camera3Device::initialize(sp<CameraProviderManager> manager) {
    ATRACE_CALL();
    Mutex::Autolock il(mInterfaceLock);
    Mutex::Autolock l(mLock);
    ALOGV("%s: Initializing HIDL device for camera %s", __FUNCTION__, mId.string());
    if (mStatus != STATUS_UNINITIALIZED) {
        CLOGE("Already initialized!");
        return INVALID_OPERATION;
    }
    if (manager == nullptr) return INVALID_OPERATION;
    sp<ICameraDeviceSession> session;
    ATRACE_BEGIN("CameraHal::openSession");
    // Call CameraProviderManager's openSession method, which opens a remote Session.
    status_t res = manager->openSession(mId.string(), this,
            /*out*/ &session);
    ATRACE_END();
    if (res != OK) {
        SET_ERR_L("Could not open camera session: %s (%d)", strerror(-res), res);
        return res;
    }
    /* ... */
    return initializeCommonLocked();
}
frameworks\av\services\camera\libcameraservice\common\CameraProviderManager.cpp
status_t CameraProviderManager::openSession(const std::string &id,
        const sp<hardware::camera::device::V3_2::ICameraDeviceCallback>& callback,
        /*out*/
        sp<hardware::camera::device::V3_2::ICameraDeviceSession> *session) {
    std::lock_guard<std::mutex> lock(mInterfaceMutex);
    // First call findDeviceInfoLocked to obtain the HAL3-related DeviceInfo3.
    auto deviceInfo = findDeviceInfoLocked(id,
            /*minVersion*/ {3,0}, /*maxVersion*/ {4,0});
    if (deviceInfo == nullptr) return NAME_NOT_FOUND;
    auto *deviceInfo3 = static_cast<ProviderInfo::DeviceInfo3*>(deviceInfo);
    Status status;
    hardware::Return<void> ret;
    // Remotely invoke CameraDevice's open method, which creates a
    // CameraDeviceSession instance and returns its local interface through the
    // out-parameter session.
    ret = deviceInfo3->mInterface->open(callback, [&status, &session]
            (Status s, const sp<device::V3_2::ICameraDeviceSession>& cameraSession) {
                status = s;
                if (status == Status::OK) {
                    *session = cameraSession;
                }
            });
    if (!ret.isOk()) {
        ALOGE("%s: Transaction error opening a session for camera device %s: %s",
                __FUNCTION__, id.c_str(), ret.description().c_str());
        return DEAD_OBJECT;
    }
    return mapToStatusT(status);
}
hardware\interfaces\camera\device\3.2\default\CameraDevice.cpp
The CameraDevice instance actually exists as soon as the HAL service has been initialized. As mentioned earlier, the open method of the remote CameraDevice instance is invoked through the deviceInfo interface in CameraProviderManager; here is its implementation:
Return<void> CameraDevice::open(const sp<ICameraDeviceCallback>& callback, open_cb _hidl_cb) {
    Status status = initStatus();
    sp<CameraDeviceSession> session = nullptr;
    if (callback == nullptr) {
        ALOGE("%s: cannot open camera %s. callback is null!",
                __FUNCTION__, mCameraId.c_str());
        _hidl_cb(Status::ILLEGAL_ARGUMENT, nullptr);
        return Void();
    }
    if (status != Status::OK) {
        /* ... */
    } else {
        mLock.lock();
        /* ... */
        /** Open HAL device */
        status_t res;
        camera3_device_t *device;
        ATRACE_BEGIN("camera3->open");
        // Note that mModule was configured when the HAL service initialized; it
        // wraps the Camera HAL interface loaded from the libhardware library.
        // From here, the call chain leads straight down into the QCamera3HWI
        // construction flow.
        res = mModule->open(mCameraId.c_str(),
                reinterpret_cast<hw_device_t**>(&device));
        ATRACE_END();
        /* ... */
        // Create the session and have the member mSession hold it; the actual
        // work happens in createSession.
        session = createSession(
                device, info.static_camera_characteristics, callback);
        /* ... */
        mSession = session;
        IF_ALOGV() {
            session->getInterface()->interfaceChain([](
                ::android::hardware::hidl_vec<::android::hardware::hidl_string> interfaceChain) {
                    ALOGV("Session interface chain:");
                    for (auto iface : interfaceChain) {
                        ALOGV("  %s", iface.c_str());
                    }
                });
        }
        mLock.unlock();
    }
    _hidl_cb(status, session->getInterface());
    return Void();
}
createSession directly creates a CameraDeviceSession. Its constructor calls an internal initialization function, which then enters the initialization flow of the HAL interface layer, QCamera3HWI. With that, the CameraService-to-HAL-Service leg of the camera open flow is essentially complete.
Summary diagram:
In HAL3, the Camera HAL's interface adaptation layer (and stream parsing layer) is QCamera3HardwareInterface, while the interface layer and implementation layer are basically the same as in HAL1: mm_camera_interface.c and mm_camera.c.
So when is the adaptation layer instance created, how is it initialized, and how does it interact with the interface and implementation layers at creation time? The main call flow is shown in the figure below:
hardware\interfaces\camera\common\1.0\default\CameraModule.cpp
As shown above, the implementation of CameraDevice::open calls mModule->open, i.e. CameraModule::open. From the code, it does little more than call mModule->common.methods->open to descend into the next layer.
Note that open here is a function pointer, and it points to QCamera2Factory's camera_device_open method. Why QCamera2Factory? For that you have to look back at the start-up and initialization flow of the HAL service.
int CameraModule::open(const char* id, struct hw_device_t** device) {
    int res;
    ATRACE_BEGIN("camera_module->open");
    res = filterOpenErrorCode(mModule->common.methods->open(&mModule->common, id, device));
    ATRACE_END();
    return res;
}
/*===========================================================================
 * FUNCTION   : camera_device_open
 *
 * DESCRIPTION: static function to open a camera device by its ID
 *
 * PARAMETERS :
 *   @camera_id : camera ID
 *   @hw_device : ptr to struct storing camera hardware device info
 *
 * RETURN     : int32_t type of status
 *              NO_ERROR  -- success
 *              none-zero failure code
 *==========================================================================*/
int QCamera2Factory::camera_device_open(
    const struct hw_module_t *module, const char *id,
    struct hw_device_t **hw_device)
{
    /* ... */
    // Note the macro-guarded compatibility path for HAL1 here. Either way, the
    // real work is done by the cameraDeviceOpen call.
#ifdef QCAMERA_HAL1_SUPPORT
    if(gQCameraMuxer)
        rc = gQCameraMuxer->camera_device_open(module, id, hw_device);
    else
#endif
        rc = gQCamera2Factory->cameraDeviceOpen(atoi(id), hw_device);
    return rc;
}

struct hw_module_methods_t QCamera2Factory::mModuleMethods = {
    // This is where the open function pointer mentioned above is bound to
    // camera_device_open.
    .open = QCamera2Factory::camera_device_open,
};
What cameraDeviceOpen does:
/*===========================================================================
 * FUNCTION   : cameraDeviceOpen
 *
 * DESCRIPTION: open a camera device with its ID
 *
 * PARAMETERS :
 *   @camera_id : camera ID
 *   @hw_device : ptr to struct storing camera hardware device info
 *
 * RETURN     : int32_t type of status
 *              NO_ERROR  -- success
 *              none-zero failure code
 *==========================================================================*/
int QCamera2Factory::cameraDeviceOpen(int camera_id,
    struct hw_device_t **hw_device)
{
    /* ... */
    if ( mHalDescriptors[camera_id].device_version == CAMERA_DEVICE_API_VERSION_3_0 ) {
        // First create the QCamera3HardwareInterface instance.
        QCamera3HardwareInterface *hw =
                new QCamera3HardwareInterface(mHalDescriptors[camera_id].cameraId,
                        mCallbacks);
        if (!hw) {
            LOGE("Allocation of hardware interface failed");
            return NO_MEMORY;
        }
        // Call the instance's openCamera method.
        rc = hw->openCamera(hw_device);
        if (rc != 0) {
            delete hw;
        }
    }
    /* ... */
    return rc;
}
hardware\qcom\camera\qcamera2\hal3\QCamera3HWI.cpp
The first thing to note is the definition of the member mCameraOps. When the instance is constructed, we have mCameraDevice.ops = &mCameraOps; (a key point).
camera3_device_ops_t QCamera3HardwareInterface::mCameraOps = {
.initialize = QCamera3HardwareInterface::initialize,
.configure_streams = QCamera3HardwareInterface::configure_streams,
.register_stream_buffers = NULL,
.construct_default_request_settings = QCamera3HardwareInterface::construct_default_request_settings,
.process_capture_request = QCamera3HardwareInterface::process_capture_request,
.get_metadata_vendor_tag_ops = NULL,
.dump = QCamera3HardwareInterface::dump,
.flush = QCamera3HardwareInterface::flush,
.reserved = {0},
};
Now on to the openCamera implementation:
int QCamera3HardwareInterface::openCamera(struct hw_device_t **hw_device)
{
    /* ... */
    // Call the other openCamera overload, which holds the actual implementation.
    rc = openCamera();
    if (rc == 0) {
        // On success, return the common part of the device struct through the
        // double pointer hw_device.
        *hw_device = &mCameraDevice.common;
    } else
        *hw_device = NULL;
    /* ... */
    return rc;
}

int QCamera3HardwareInterface::openCamera()
{
    /* ... */
    // Here we enter the interface layer, calling its camera_open interface.
    // Note that this fills in mCameraHandle.
    rc = camera_open((uint8_t)mCameraId, &mCameraHandle);
    /* ... */
    // Note that a camEvtHandle is passed in here.
    rc = mCameraHandle->ops->register_event_notify(mCameraHandle->camera_handle,
            camEvtHandle, (void *)this);
    /* ... */
    rc = mCameraHandle->ops->get_session_id(mCameraHandle->camera_handle,
            &sessionId[mCameraId]);
    /* ... */
    return NO_ERROR;
}
That covers the openCamera part of the interface adaptation layer; next, its initialization function. As analyzed earlier, creating the CameraDeviceSession instance calls its internal initialization method, which includes calling QCamera3HWI's initialize method.
int QCamera3HardwareInterface::initialize(const struct camera3_device *device,
        const camera3_callback_ops_t *callback_ops)
{
    LOGD("E");
    QCamera3HardwareInterface *hw =
            reinterpret_cast<QCamera3HardwareInterface *>(device->priv);
    if (!hw) {
        LOGE("NULL camera device");
        return -ENODEV;
    }
    // Call the function that actually implements the initialization logic.
    int rc = hw->initialize(callback_ops);
    LOGD("X");
    return rc;
}

int QCamera3HardwareInterface::initialize(
        const struct camera3_callback_ops *callback_ops)
{
    ATRACE_CALL();
    int rc;
    LOGI("E :mCameraId = %d mState = %d", mCameraId, mState);
    pthread_mutex_lock(&mMutex);
    // Validate current state
    switch (mState) {
        case OPENED:
            /* valid state */
            break;
        default:
            LOGE("Invalid state %d", mState);
            rc = -ENODEV;
            goto err1;
    }
    // Initialize the parameters (mParameters). Note that these differ from
    // CameraParameter: this is the metadata_buffer parameter structure.
    rc = initParameters();
    if (rc < 0) {
        LOGE("initParamters failed %d", rc);
        goto err1;
    }
    // Associate camera3_callback_ops with mCallbackOps here.
    mCallbackOps = callback_ops;
    // Obtain the mChannelHandle handle; the method actually called is
    // mm_camera_intf_add_channel in mm_camera_interface.c.
    mChannelHandle = mCameraHandle->ops->add_channel(
            mCameraHandle->camera_handle, NULL, NULL, this);
    if (mChannelHandle == 0) {
        LOGE("add_channel failed");
        rc = -ENOMEM;
        pthread_mutex_unlock(&mMutex);
        return rc;
    }
    pthread_mutex_unlock(&mMutex);
    mCameraInitialized = true;
    mState = INITIALIZED;
    LOGI("X");
    return 0;

err1:
    pthread_mutex_unlock(&mMutex);
    return rc;
}
hardware\qcom\camera\qcamera2\stack\mm-camera-interface\src\mm_camera_interface.c
camera_open doesn't do much either; the allocation and initialization of cam_obj are omitted here. It actually calls mm_camera_open in the implementation layer to really open the camera device, with all the device information filled into the cam_obj structure.
int32_t camera_open(uint8_t camera_idx, mm_camera_vtbl_t **camera_vtbl)
{
    int32_t rc = 0;
    mm_camera_obj_t *cam_obj = NULL;
    /* ... */
    rc = mm_camera_open(cam_obj);
    /* ... */
}
The mm_camera_intf_add_channel called during initialization looks like this:
static uint32_t mm_camera_intf_add_channel(uint32_t camera_handle,
                                           mm_camera_channel_attr_t *attr,
                                           mm_camera_buf_notify_t channel_cb,
                                           void *userdata)
{
    uint32_t ch_id = 0;
    mm_camera_obj_t * my_obj = NULL;
    LOGD("E camera_handler = %d", camera_handle);
    pthread_mutex_lock(&g_intf_lock);
    my_obj = mm_camera_util_get_camera_by_handler(camera_handle);
    if(my_obj) {
        pthread_mutex_lock(&my_obj->cam_lock);
        pthread_mutex_unlock(&g_intf_lock);
        // Obtain a channel id (its handle) by calling mm_camera_add_channel in
        // the implementation layer.
        ch_id = mm_camera_add_channel(my_obj, attr, channel_cb, userdata);
    } else {
        pthread_mutex_unlock(&g_intf_lock);
    }
    LOGD("X ch_id = %d", ch_id);
    return ch_id;
}
hardware\qcom\camera\qcamera2\stack\mm-camera-interface\src\mm_camera.c
At last we reach the lowest implementation layer. mm_camera_open mainly fills in my_obj and starts and initializes some thread-related machinery; the thread-related parts are omitted here.
int32_t mm_camera_open(mm_camera_obj_t *my_obj)
{
    char dev_name[MM_CAMERA_DEV_NAME_LEN];
    int32_t rc = 0;
    int8_t n_try=MM_CAMERA_DEV_OPEN_TRIES;
    uint8_t sleep_msec=MM_CAMERA_DEV_OPEN_RETRY_SLEEP;
    int cam_idx = 0;
    const char *dev_name_value = NULL;
    int l_errno = 0;
    pthread_condattr_t cond_attr;
    LOGD("begin\n");
    if (NULL == my_obj) {
        goto on_error;
    }
    // Look up the device name by my_obj's handle (not analyzed further here).
    dev_name_value = mm_camera_util_get_dev_name(my_obj->my_hdl);
    if (NULL == dev_name_value) {
        goto on_error;
    }
    snprintf(dev_name, sizeof(dev_name), "/dev/%s",
             dev_name_value);
    sscanf(dev_name, "/dev/video%d", &cam_idx);
    LOGD("dev name = %s, cam_idx = %d", dev_name, cam_idx);
    do{
        n_try--;
        errno = 0;
        // Open the device file and keep its file descriptor in my_obj->ctrl_fd.
        my_obj->ctrl_fd = open(dev_name, O_RDWR | O_NONBLOCK);
        l_errno = errno;
        LOGD("ctrl_fd = %d, errno == %d", my_obj->ctrl_fd, l_errno);
        if((my_obj->ctrl_fd >= 0) || (errno != EIO && errno != ETIMEDOUT) || (n_try <= 0 )) {
            break;
        }
        LOGE("Failed with %s error, retrying after %d milli-seconds",
             strerror(errno), sleep_msec);
        usleep(sleep_msec * 1000U);
    } while (n_try > 0);
    if (my_obj->ctrl_fd < 0) {
        LOGE("cannot open control fd of '%s' (%s)\n",
             dev_name, strerror(l_errno));
        if (l_errno == EBUSY)
            rc = -EUSERS;
        else
            rc = -1;
        goto on_error;
    } else {
        // With the file descriptor in hand, fetch the session id.
        mm_camera_get_session_id(my_obj, &my_obj->sessionid);
        LOGH("Camera Opened id = %d sessionid = %d", cam_idx, my_obj->sessionid);
    }
    /* ... */
    /* unlock cam_lock, we need release global intf_lock in camera_open(),
     * in order not block operation of other Camera in dual camera use case.*/
    pthread_mutex_unlock(&my_obj->cam_lock);
    return rc;
}
For the initialization side, mm_camera_add_channel looks like this:
uint32_t mm_camera_add_channel(mm_camera_obj_t *my_obj,
                               mm_camera_channel_attr_t *attr,
                               mm_camera_buf_notify_t channel_cb,
                               void *userdata)
{
    mm_channel_t *ch_obj = NULL;
    uint8_t ch_idx = 0;
    uint32_t ch_hdl = 0;
    // Find the first existing channel whose state is NOTUSED and keep it in ch_obj.
    for(ch_idx = 0; ch_idx < MM_CAMERA_CHANNEL_MAX; ch_idx++) {
        if (MM_CHANNEL_STATE_NOTUSED == my_obj->ch[ch_idx].state) {
            ch_obj = &my_obj->ch[ch_idx];
            break;
        }
    }
    /* Initialize the ch_obj structure. First call mm_camera_util_generate_handler
     * to generate a handle for it (also this function's return value), then set
     * its state to STOPPED. Note that the my_obj pointer and its session id are
     * saved here as well; finally mm_channel_init completes the channel
     * initialization. */
    if (NULL != ch_obj) {
        /* initialize channel obj */
        memset(ch_obj, 0, sizeof(mm_channel_t));
        ch_hdl = mm_camera_util_generate_handler(ch_idx);
        ch_obj->my_hdl = ch_hdl;
        ch_obj->state = MM_CHANNEL_STATE_STOPPED;
        ch_obj->cam_obj = my_obj;
        pthread_mutex_init(&ch_obj->ch_lock, NULL);
        ch_obj->sessionid = my_obj->sessionid;
        mm_channel_init(ch_obj, attr, channel_cb, userdata);
    }
    pthread_mutex_unlock(&my_obj->cam_lock);
    return ch_hdl;
}
Summary diagram:
In short, after all of the above, the entire camera link from top to bottom is connected; from here, the APP only needs to submit a preview Request along this flow to start receiving preview data.
3. The core concept: Request
The request is the most important concept running through the camera2 data processing flow: the application framework obtains the results it wants by sending requests to the camera subsystem.
A request has the following important characteristics:
A single request can correspond to a series of results.
A request should contain all necessary configuration, stored in its metadata. For example: resolution and pixel format; sensor, lens, and flash control; 3A operating modes; RAW-to-YUV processing controls; statistics generation; and so on.
A request must carry the corresponding surfaces (the streams in the framework) that will receive the returned images.
Multiple requests can be in flight at once, and submitting a request is non-blocking: a new request can be submitted before the previous one has finished processing.
Requests in the queue are always processed FIFO.
A snapshot request has higher priority than a preview request.
3.1 The overall request processing flow
Open flow (black arrows)
CameraManager registers an AvailabilityCallback to be notified when a camera device's availability changes.
CameraManager obtains the currently available camera ids via getCameraIdList(), and the characteristics of a given camera device via getCameraCharacteristics().
CameraManager calls openCamera() to open the specified camera device and returns a CameraDevice object; that specific device is then driven through this CameraDevice object.
Using the CameraDevice object's createCaptureSession(), create a session; all data requests (preview, capture, etc.) go through the session. When creating the session, Surfaces must be provided as parameters to receive the returned images.
Configure stream flow (blue arrows)
Allocate Surfaces (the OUTPUT STREAMS DESTINATIONS box in the figure above), to be passed in when creating the session and to receive the images the session returns.
After the session is created, the surfaces are configured as framework streams. In the framework, a stream defines the image size and format.
Each request must carry target surfaces specifying which configured stream the returned images belong to.
Request processing flow (orange arrows)
The CameraDevice object creates requests via createCaptureRequest(); every request needs surfaces and settings (the settings are the metadata; all of a request's configuration lives in the metadata).
Requests are sent to the framework with the session APIs capture(), captureBurst(), setStreamingRequest(), setStreamingBurst(), and so on.
A preview request is sent via setStreamingRequest() or setStreamingBurst(), called only once. The request is set into the repeating request list; whenever the pending request queue has no requests, the requests in the repeating list are copied into the pending queue.
A capture request is sent via capture() or captureBurst(), called each time a picture is taken. Each invocation puts the request directly into the pending request queue, which is why a capture request has higher priority than a preview request.
The in-progress queue holds the requests currently being processed; each time one completes, a new request is taken from the pending queue and placed here.
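The queueing rules above can be sketched with a small stand-alone simulation (plain Java; the class `RequestQueues` and the string labels are invented here for illustration, this is not the real framework code):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

// Toy model of the framework-side queues: a repeating list that refills the
// pending queue whenever it runs dry, plus capture() requests that are
// enqueued directly, which is why snapshots effectively outrank preview.
public class RequestQueues {
    private final List<String> repeatingList = new ArrayList<>();
    private final ArrayDeque<String> pendingQueue = new ArrayDeque<>();

    // setStreamingRequest(): called once; installs the repeating request.
    void setRepeatingRequest(String request) {
        repeatingList.clear();
        repeatingList.add(request);
    }

    // capture(): called per shot; goes straight into the pending queue.
    void capture(String request) {
        pendingQueue.add(request);
    }

    // Take the next request for the in-progress queue (FIFO). If the pending
    // queue is empty, copy the repeating list into it first.
    String nextInProgress() {
        if (pendingQueue.isEmpty()) {
            pendingQueue.addAll(repeatingList);
        }
        return pendingQueue.poll();
    }

    public static void main(String[] args) {
        RequestQueues q = new RequestQueues();
        q.setRepeatingRequest("preview");
        System.out.println(q.nextInProgress()); // preview (copied from repeating list)
        q.capture("snapshot");
        System.out.println(q.nextInProgress()); // snapshot (already pending, served first)
        System.out.println(q.nextInProgress()); // preview (repeating list refills again)
    }
}
```

The snapshot is served before the next preview copy simply because it was already sitting in the pending queue, while preview requests are only copied in on demand.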
Result return flow (purple arrows)
Data returned from the hardware layer is packed into a result, delivered through the session's capture callback.
3.2 How the HAL handles requests
1. The framework sends asynchronous requests to the HAL.
2. The HAL must process requests in order, and for every request return a timestamp (the shutter, i.e. the frame's generation time), metadata, and image buffers.
3. For each class of stream a request references, results must be returned FIFO. For example, on the preview stream, result id 9 must be returned before result id 10; but the capture stream may still only be at result id 7, because capture and preview use different streams.
4. All the information the HAL needs arrives in the metadata carried by the request, and all the information the HAL returns goes back in the metadata carried by the result.
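The per-stream FIFO rule in point 3 can be expressed as a tiny checker (plain Java; `StreamFifoChecker` and the stream names are invented for illustration, not real HAL code):

```java
import java.util.HashMap;
import java.util.Map;

// Toy check of the per-stream FIFO rule: results for one stream must come
// back in submission order, but different streams may progress independently.
public class StreamFifoChecker {
    private final Map<String, Integer> lastReturned = new HashMap<>();

    // Returns true if returning result `id` on `stream` preserves FIFO order.
    boolean returnResult(String stream, int id) {
        int last = lastReturned.getOrDefault(stream, -1);
        if (id <= last) return false;   // out of order within this stream
        lastReturned.put(stream, id);
        return true;
    }

    public static void main(String[] args) {
        StreamFifoChecker hal = new StreamFifoChecker();
        System.out.println(hal.returnResult("preview", 9));   // true
        System.out.println(hal.returnResult("preview", 10));  // true
        // the snapshot stream lagging at 7 is fine: streams are independent
        System.out.println(hal.returnResult("snapshot", 7));  // true
        // but preview going back to 8 would violate per-stream FIFO
        System.out.println(hal.returnResult("preview", 8));   // false
    }
}
```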
The HAL's overall request handling flow is shown in the figure below.
Request processing flow (black arrows)
The framework submits requests to the HAL asynchronously; the HAL processes them in order and returns results.
Every request submitted to the HAL must carry streams. Streams are divided into input streams and output streams: an input stream's buffers already contain image data, which the HAL reprocesses; an output stream's buffers are empty, and the HAL fills them with the image data it generates.
Input stream processing (INPUT STREAM 1 in the figure)
The request carries the input stream and input buffers to the HAL.
The HAL reprocesses them, refills the buffers with the new image data, and returns them to the framework.
Output stream processing (OUTPUT STREAM 1…N in the figure)
The request carries the output streams and output buffers to the HAL.
After processing through a series of modules, the HAL writes the image data into the buffers and returns them to the framework.
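The two stream directions can be modeled in a few lines (plain Java; `StreamBuffers`, the int-array "buffers", and the +100 "tone mapping" transform are invented here purely for illustration):

```java
import java.util.Arrays;

// Toy model of the two stream directions: an input buffer already holds image
// data and gets reprocessed, while an output buffer arrives empty and the HAL
// fills it. Buffers are just int arrays here.
public class StreamBuffers {
    // Reprocess an input-stream buffer: transform the existing data.
    static int[] reprocess(int[] inputBuffer) {
        int[] out = inputBuffer.clone();
        for (int i = 0; i < out.length; i++) out[i] += 100; // stand-in transform
        return out;
    }

    // Fill an empty output-stream buffer with generated image data.
    static void fill(int[] emptyBuffer) {
        Arrays.fill(emptyBuffer, 42);
    }

    public static void main(String[] args) {
        int[] input = {1, 2, 3};
        System.out.println(Arrays.toString(reprocess(input))); // [101, 102, 103]
        int[] output = new int[3];
        fill(output);
        System.out.println(Arrays.toString(output)); // [42, 42, 42]
    }
}
```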
Original article: https://www.cnblogs.com/blogs-of-lxl/p/10651611.html