Is it possible to use 3d scans of the face for input? (e.g Kinect v2)


  • #7941
    Opthimo
    Participant

      Hey guys 😊

      I wanted to ask whether it is possible to use 3D scans of a face as input to speed up processing.

      If possible, I would like to be able to change my face in real time as soon as I stand in front of the Kinect and it has enough data about my face.

      The faces that I would use as model B I could prepare beforehand.

      Would be great if someone could help me with that.

      Greetings

      Thimo

  • #8084
    defalafa
    Participant

      Real-time face swapping is what DeepFaceLive does; search Google for the tool itself.

  • #8309
    DepthMapper
    Participant

      Could this software be used to generate a facial depth map for 3D conversion of 2D video?
      I’m hoping for a feature like that; it could even aid in upscaling.
      All I’m looking for is facial depth maps.
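      As far as I know DeepFaceLab has no such feature, but as a rough idea of what a facial depth map from a Kinect v2 frame involves, here is a minimal numpy sketch. It assumes a face bounding box from some external 2D face detector (the box coordinates below are made up for illustration); only the frame size and millimetre depth units come from the Kinect v2 itself.

      ```python
      import numpy as np

      # Kinect v2 depth frames are 512x424, 16-bit, values in millimetres
      # (0 = no reading). Simulate one here with random data.
      rng = np.random.default_rng(0)
      depth_mm = rng.integers(500, 4500, size=(424, 512), dtype=np.uint16)

      # Hypothetical face bounding box (x, y, w, h) from any 2D face
      # detector run on the registered colour image -- not from the SDK.
      x, y, w, h = 200, 150, 96, 120

      face = depth_mm[y:y + h, x:x + w].astype(np.float32)
      valid = face > 0                  # drop dropout pixels (depth == 0)

      # Normalise valid depths within the crop to [0, 1]:
      # nearest point -> 1.0, farthest -> 0.0, like a typical depth map.
      near, far = face[valid].min(), face[valid].max()
      depth_map = np.zeros_like(face)
      depth_map[valid] = (far - face[valid]) / max(far - near, 1e-6)

      print(depth_map.shape)
      ```

      Normalising per-crop rather than over the whole frame keeps the face’s own relief visible regardless of how far the person stands from the sensor.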
