Version 22.0.2 of the firebase-ml-model-interpreter library introduces a new getLatestModelFile() method, which gets the location on the device of custom models. You can use this method to directly instantiate a TensorFlow Lite Interpreter object, which you can use instead of the FirebaseModelInterpreter wrapper.
Going forward, this is the preferred approach. Because the TensorFlow Lite interpreter version is no longer coupled with the Firebase library version, you have more flexibility to upgrade to new versions of TensorFlow Lite when you want, or to more easily use custom TensorFlow Lite builds.
This page shows how you can migrate from using FirebaseModelInterpreter to the TensorFlow Lite Interpreter.
1. Update project dependencies
Update your project's dependencies to include version 22.0.2 of the firebase-ml-model-interpreter library (or newer) and the tensorflow-lite library:

Before

implementation("com.google.firebase:firebase-ml-model-interpreter:22.0.1")

After

implementation("com.google.firebase:firebase-ml-model-interpreter:22.0.2")
implementation("org.tensorflow:tensorflow-lite:2.0.0")

2. Create a TensorFlow Lite interpreter instead of a FirebaseModelInterpreter

Instead of creating a FirebaseModelInterpreter, get the model's location on device with getLatestModelFile() and use it to create a TensorFlow Lite Interpreter.

Before

Kotlin

val remoteModel = FirebaseCustomRemoteModel.Builder("your_model").build()
val options = FirebaseModelInterpreterOptions.Builder(remoteModel).build()
val interpreter = FirebaseModelInterpreter.getInstance(options)

Java

FirebaseCustomRemoteModel remoteModel =
        new FirebaseCustomRemoteModel.Builder("your_model").build();
FirebaseModelInterpreterOptions options =
        new FirebaseModelInterpreterOptions.Builder(remoteModel).build();
FirebaseModelInterpreter interpreter = FirebaseModelInterpreter.getInstance(options);

After

Kotlin
val remoteModel = FirebaseCustomRemoteModel.Builder("your_model").build()
FirebaseModelManager.getInstance().getLatestModelFile(remoteModel)
    .addOnCompleteListener { task ->
        val modelFile = task.getResult()
        if (modelFile != null) {
            // Instantiate an org.tensorflow.lite.Interpreter object.
            interpreter = Interpreter(modelFile)
        }
    }
Java
FirebaseCustomRemoteModel remoteModel =
        new FirebaseCustomRemoteModel.Builder("your_model").build();
FirebaseModelManager.getInstance().getLatestModelFile(remoteModel)
        .addOnCompleteListener(new OnCompleteListener<File>() {
            @Override
            public void onComplete(@NonNull Task<File> task) {
                File modelFile = task.getResult();
                if (modelFile != null) {
                    // Instantiate an org.tensorflow.lite.Interpreter object.
                    Interpreter interpreter = new Interpreter(modelFile);
                }
            }
        });
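Note that getLatestModelFile() returns a model file only after the model has been downloaded to the device. If your app has not already triggered a download elsewhere, you can request one with the model manager before creating the interpreter. A minimal Kotlin sketch, assuming the download() and FirebaseModelDownloadConditions APIs of the same firebase-ml-model-interpreter library; the Wi-Fi restriction is an illustrative choice, not a requirement:

val remoteModel = FirebaseCustomRemoteModel.Builder("your_model").build()
// Assumption: restrict the (possibly large) model download to Wi-Fi.
val conditions = FirebaseModelDownloadConditions.Builder()
    .requireWifi()
    .build()
FirebaseModelManager.getInstance().download(remoteModel, conditions)
    .addOnSuccessListener {
        // The model is now on the device; getLatestModelFile() will return
        // its location and you can create the TensorFlow Lite Interpreter.
    }
    .addOnFailureListener { e ->
        // Download failed; fall back to a bundled model or retry later.
    }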
3. Update input and output preparation code
With FirebaseModelInterpreter, you specify the model's input and output shapes by passing a FirebaseModelInputOutputOptions object to the interpreter when you run it.
For the TensorFlow Lite interpreter, you instead allocate ByteBuffer objects with the right size for your model's input and output.
For example, if your model has an input shape of [1 224 224 3] float values and an output shape of [1 1000] float values, make these changes:
Before
Kotlin
val inputOutputOptions = FirebaseModelInputOutputOptions.Builder()
    .setInputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, 224, 224, 3))
    .setOutputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, 1000))
    .build()

val input = ByteBuffer.allocateDirect(224 * 224 * 3 * 4).order(ByteOrder.nativeOrder())
// Then populate with input data.

val inputs = FirebaseModelInputs.Builder()
    .add(input)
    .build()

interpreter.run(inputs, inputOutputOptions)
    .addOnSuccessListener { outputs ->
        // ...
    }
    .addOnFailureListener {
        // Task failed with an exception.
        // ...
    }
Java
FirebaseModelInputOutputOptions inputOutputOptions =
        new FirebaseModelInputOutputOptions.Builder()
                .setInputFormat(0, FirebaseModelDataType.FLOAT32, new int[]{1, 224, 224, 3})
                .setOutputFormat(0, FirebaseModelDataType.FLOAT32, new int[]{1, 1000})
                .build();

float[][][][] input = new float[1][224][224][3];
// Then populate with input data.

FirebaseModelInputs inputs = new FirebaseModelInputs.Builder()
        .add(input)
        .build();

interpreter.run(inputs, inputOutputOptions)
        .addOnSuccessListener(
                new OnSuccessListener<FirebaseModelOutputs>() {
                    @Override
                    public void onSuccess(FirebaseModelOutputs result) {
                        // ...
                    }
                })
        .addOnFailureListener(
                new OnFailureListener() {
                    @Override
                    public void onFailure(@NonNull Exception e) {
                        // Task failed with an exception
                        // ...
                    }
                });
After
Kotlin
val inBufferSize = 1 * 224 * 224 * 3 * java.lang.Float.SIZE / java.lang.Byte.SIZE
val inputBuffer = ByteBuffer.allocateDirect(inBufferSize).order(ByteOrder.nativeOrder())
// Then populate with input data.

val outBufferSize = 1 * 1000 * java.lang.Float.SIZE / java.lang.Byte.SIZE
val outputBuffer = ByteBuffer.allocateDirect(outBufferSize).order(ByteOrder.nativeOrder())

interpreter.run(inputBuffer, outputBuffer)
Java
int inBufferSize = 1 * 224 * 224 * 3 * java.lang.Float.SIZE / java.lang.Byte.SIZE;
ByteBuffer inputBuffer =
        ByteBuffer.allocateDirect(inBufferSize).order(ByteOrder.nativeOrder());
// Then populate with input data.

int outBufferSize = 1 * 1000 * java.lang.Float.SIZE / java.lang.Byte.SIZE;
ByteBuffer outputBuffer =
        ByteBuffer.allocateDirect(outBufferSize).order(ByteOrder.nativeOrder());

interpreter.run(inputBuffer, outputBuffer);
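How you fill the input buffer depends on how your model was trained. As one illustration, for a model that expects a 224x224 RGB image with channel values normalized to [0, 1], a Kotlin sketch like the following could populate the buffer (the image size, channel order, and normalization scheme are assumptions about your model, not part of the migration):

import android.graphics.Bitmap
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Convert a bitmap to a direct ByteBuffer holding [1, 224, 224, 3] floats.
fun bitmapToInputBuffer(bitmap: Bitmap): ByteBuffer {
    val scaled = Bitmap.createScaledBitmap(bitmap, 224, 224, true)
    val buffer = ByteBuffer.allocateDirect(1 * 224 * 224 * 3 * 4)
        .order(ByteOrder.nativeOrder())
    val pixels = IntArray(224 * 224)
    scaled.getPixels(pixels, 0, 224, 0, 0, 224, 224)
    for (pixel in pixels) {
        // Extract R, G, B from the packed ARGB int and normalize to [0, 1];
        // adjust this to match the preprocessing your model was trained with.
        buffer.putFloat(((pixel shr 16) and 0xFF) / 255.0f)
        buffer.putFloat(((pixel shr 8) and 0xFF) / 255.0f)
        buffer.putFloat((pixel and 0xFF) / 255.0f)
    }
    buffer.rewind()
    return buffer
}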
4. Update output handling code
Finally, instead of getting the model's output with the FirebaseModelOutputs object's getOutput() method, convert the ByteBuffer output to whatever structure is convenient for your use case.
For example, if you're doing classification, you might make changes like the following:
Before
Kotlin
val output = result.getOutput<Array<FloatArray>>(0)
val probabilities = output[0]
try {
    val reader = BufferedReader(InputStreamReader(assets.open("custom_labels.txt")))
    for (probability in probabilities) {
        val label: String = reader.readLine()
        println("$label: $probability")
    }
} catch (e: IOException) {
    // File not found?
}
Java
float[][] output = result.getOutput(0);
float[] probabilities = output[0];
try {
    BufferedReader reader = new BufferedReader(
            new InputStreamReader(getAssets().open("custom_labels.txt")));
    for (float probability : probabilities) {
        String label = reader.readLine();
        Log.i(TAG, String.format("%s: %1.4f", label, probability));
    }
} catch (IOException e) {
    // File not found?
}
After
Kotlin
modelOutput.rewind()
val probabilities = modelOutput.asFloatBuffer()
try {
    val reader = BufferedReader(
        InputStreamReader(assets.open("custom_labels.txt")))
    for (i in 0 until probabilities.capacity()) {
        val label: String = reader.readLine()
        val probability = probabilities.get(i)
        println("$label: $probability")
    }
} catch (e: IOException) {
    // File not found?
}
Java
modelOutput.rewind();
FloatBuffer probabilities = modelOutput.asFloatBuffer();
try {
    BufferedReader reader = new BufferedReader(
            new InputStreamReader(getAssets().open("custom_labels.txt")));
    for (int i = 0; i < probabilities.capacity(); i++) {
        String label = reader.readLine();
        float probability = probabilities.get(i);
        Log.i(TAG, String.format("%s: %1.4f", label, probability));
    }
} catch (IOException e) {
    // File not found?
}
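If you only need the most likely classes rather than a full printout, you can pair each score with its label and sort. A short Kotlin sketch using only the standard library (the custom_labels.txt name matches the examples above; the topK helper and the top-5 default are illustrative, and it assumes the label count matches the model's output length, 1000 in this example):

import java.nio.ByteBuffer

// Return the k highest-probability (label, score) pairs from the output buffer.
fun topK(modelOutput: ByteBuffer, labels: List<String>, k: Int = 5): List<Pair<String, Float>> {
    modelOutput.rewind()
    val probabilities = modelOutput.asFloatBuffer()
    return labels.indices
        .map { i -> labels[i] to probabilities.get(i) }
        .sortedByDescending { it.second }
        .take(k)
}

// Usage: load the labels once, then rank each inference result.
// val labels = assets.open("custom_labels.txt").bufferedReader().readLines()
// val best = topK(outputBuffer, labels)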
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["필요한 정보가 없음","missingTheInformationINeed","thumb-down"],["너무 복잡함/단계 수가 너무 많음","tooComplicatedTooManySteps","thumb-down"],["오래됨","outOfDate","thumb-down"],["번역 문제","translationIssue","thumb-down"],["샘플/코드 문제","samplesCodeIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-09-01(UTC)"],[],[],null,["\u003cbr /\u003e\n\nVersion 22.0.2 of the `firebase-ml-model-interpreter` library introduces a new\n`getLatestModelFile()` method, which gets the location on the device of custom\nmodels. You can use this method to directly instantiate a TensorFlow Lite\n`Interpreter` object, which you can use instead of the\n`FirebaseModelInterpreter` wrapper.\n\nGoing forward, this is the preferred approach. Because the TensorFlow Lite\ninterpreter version is no longer coupled with the Firebase library version, you\nhave more flexibility to upgrade to new versions of TensorFlow Lite when you\nwant, or more easily use custom TensorFlow Lite builds.\n\nThis page shows how you can migrate from using `FirebaseModelInterpreter` to the\nTensorFlow Lite `Interpreter`.\n\n1. Update project dependencies\n\nUpdate your project's dependencies to include version 22.0.2 of the\n`firebase-ml-model-interpreter` library (or newer) and the `tensorflow-lite`\nlibrary:\n\nBefore \n\n implementation(\"com.google.firebase:firebase-ml-model-interpreter:22.0.1\")\n\nAfter \n\n implementation(\"com.google.firebase:firebase-ml-model-interpreter:22.0.2\")\n implementation(\"org.tensorflow:tensorflow-lite:2.0.0\")\n\n2. Create a TensorFlow Lite interpreter instead of a FirebaseModelInterpreter\n\nInstead of creating a `FirebaseModelInterpreter`, get the model's location on\ndevice with `getLatestModelFile()` and use it to create a TensorFlow Lite\n`Interpreter`.\n\nBefore \n\nKotlin \n\n val remoteModel = FirebaseCustomRemoteModel.Builder(\"your_model\").build()\n val options = FirebaseModelInterpreterOptions.Builder(remoteModel).build()\n val interpreter = FirebaseModelInterpreter.getInstance(options)\n\nJava \n\n FirebaseCustomRemoteModel remoteModel =\n new FirebaseCustomRemoteModel.Builder(\"your_model\").build();\n FirebaseModelInterpreterOptions options =\n new FirebaseModelInterpreterOptions.Builder(remoteModel).build();\n FirebaseModelInterpreter interpreter = FirebaseModelInterpreter.getInstance(options);\n\nAfter \n\nKotlin \n\n val remoteModel = FirebaseCustomRemoteModel.Builder(\"your_model\").build()\n FirebaseModelManager.getInstance().getLatestModelFile(remoteModel)\n .addOnCompleteListener { task -\u003e\n val modelFile = task.getResult()\n if (modelFile != null) {\n // Instantiate an org.tensorflow.lite.Interpreter object.\n interpreter = Interpreter(modelFile)\n }\n }\n\nJava \n\n FirebaseCustomRemoteModel remoteModel =\n new FirebaseCustomRemoteModel.Builder(\"your_model\").build();\n FirebaseModelManager.getInstance().getLatestModelFile(remoteModel)\n .addOnCompleteListener(new OnCompleteListener\u003cFile\u003e() {\n @Override\n public void onComplete(@NonNull Task\u003cFile\u003e task) {\n File modelFile = task.getResult();\n if (modelFile != null) {\n // Instantiate an org.tensorflow.lite.Interpreter object.\n Interpreter interpreter = new Interpreter(modelFile);\n }\n }\n });\n\n3. 
Update input and output preparation code\n\nWith `FirebaseModelInterpreter`, you specify the model's input and output shapes\nby passing a `FirebaseModelInputOutputOptions` object to the interpreter when\nyou run it.\n\nFor the TensorFlow Lite interpreter, you instead allocate `ByteBuffer` objects\nwith the right size for your model's input and output.\n\nFor example, if your model has an input shape of \\[1 224 224 3\\] `float` values\nand an output shape of \\[1 1000\\] `float` values, make these changes:\n\nBefore \n\nKotlin \n\n val inputOutputOptions = FirebaseModelInputOutputOptions.Builder()\n .setInputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, 224, 224, 3))\n .setOutputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, 1000))\n .build()\n\n val input = ByteBuffer.allocateDirect(224*224*3*4).order(ByteOrder.nativeOrder())\n // Then populate with input data.\n\n val inputs = FirebaseModelInputs.Builder()\n .add(input)\n .build()\n\n interpreter.run(inputs, inputOutputOptions)\n .addOnSuccessListener { outputs -\u003e\n // ...\n }\n .addOnFailureListener {\n // Task failed with an exception.\n // ...\n }\n\nJava \n\n FirebaseModelInputOutputOptions inputOutputOptions =\n new FirebaseModelInputOutputOptions.Builder()\n .setInputFormat(0, FirebaseModelDataType.FLOAT32, new int[]{1, 224, 224, 3})\n .setOutputFormat(0, FirebaseModelDataType.FLOAT32, new int[]{1, 1000})\n .build();\n\n float[][][][] input = new float[1][224][224][3];\n // Then populate with input data.\n\n FirebaseModelInputs inputs = new FirebaseModelInputs.Builder()\n .add(input)\n .build();\n\n interpreter.run(inputs, inputOutputOptions)\n .addOnSuccessListener(\n new OnSuccessListener\u003cFirebaseModelOutputs\u003e() {\n @Override\n public void onSuccess(FirebaseModelOutputs result) {\n // ...\n }\n })\n .addOnFailureListener(\n new OnFailureListener() {\n @Override\n public void onFailure(@NonNull Exception e) {\n // Task failed with an exception\n // ...\n }\n });\n\nAfter \n\nKotlin \n\n val inBufferSize = 1 * 224 * 224 * 3 * java.lang.Float.SIZE / java.lang.Byte.SIZE\n val inputBuffer = ByteBuffer.allocateDirect(inBufferSize).order(ByteOrder.nativeOrder())\n // Then populate with input data.\n\n val outBufferSize = 1 * 1000 * java.lang.Float.SIZE / java.lang.Byte.SIZE\n val outputBuffer = ByteBuffer.allocateDirect(outBufferSize).order(ByteOrder.nativeOrder())\n\n interpreter.run(inputBuffer, outputBuffer)\n\nJava \n\n int inBufferSize = 1 * 224 * 224 * 3 * java.lang.Float.SIZE / java.lang.Byte.SIZE;\n ByteBuffer inputBuffer =\n ByteBuffer.allocateDirect(inBufferSize).order(ByteOrder.nativeOrder());\n // Then populate with input data.\n\n int outBufferSize = 1 * 1000 * java.lang.Float.SIZE / java.lang.Byte.SIZE;\n ByteBuffer outputBuffer =\n ByteBuffer.allocateDirect(outBufferSize).order(ByteOrder.nativeOrder());\n\n interpreter.run(inputBuffer, outputBuffer);\n\n4. 
Update output handling code\n\nFinally, instead of getting the model's output with the `FirebaseModelOutputs`\nobject's `getOutput()` method, convert the `ByteBuffer` output to whatever\nstructure is convenient for your use case.\n\nFor example, if you're doing classification, you might make changes like the\nfollowing:\n\nBefore \n\nKotlin \n\n val output = result.getOutput(0)\n val probabilities = output[0]\n try {\n val reader = BufferedReader(InputStreamReader(assets.open(\"custom_labels.txt\")))\n for (probability in probabilities) {\n val label: String = reader.readLine()\n println(\"$label: $probability\")\n }\n } catch (e: IOException) {\n // File not found?\n }\n\nJava \n\n float[][] output = result.getOutput(0);\n float[] probabilities = output[0];\n try {\n BufferedReader reader = new BufferedReader(\n new InputStreamReader(getAssets().open(\"custom_labels.txt\")));\n for (float probability : probabilities) {\n String label = reader.readLine();\n Log.i(TAG, String.format(\"%s: %1.4f\", label, probability));\n }\n } catch (IOException e) {\n // File not found?\n }\n\nAfter \n\nKotlin \n\n modelOutput.rewind()\n val probabilities = modelOutput.asFloatBuffer()\n try {\n val reader = BufferedReader(\n InputStreamReader(assets.open(\"custom_labels.txt\")))\n for (i in probabilities.capacity()) {\n val label: String = reader.readLine()\n val probability = probabilities.get(i)\n println(\"$label: $probability\")\n }\n } catch (e: IOException) {\n // File not found?\n }\n\nJava \n\n modelOutput.rewind();\n FloatBuffer probabilities = modelOutput.asFloatBuffer();\n try {\n BufferedReader reader = new BufferedReader(\n new InputStreamReader(getAssets().open(\"custom_labels.txt\")));\n for (int i = 0; i \u003c probabilities.capacity(); i++) {\n String label = reader.readLine();\n float probability = probabilities.get(i);\n Log.i(TAG, String.format(\"%s: %1.4f\", label, probability));\n }\n } catch (IOException e) {\n // File not found?\n }"]]