This is day 11 of my participation in the August Update Challenge; see the event details here: August Update Challenge.

A common scenario in online-education products is displaying a student's answering process in real time while also supporting playback. The usual approach is to record coordinates and redraw them to meet the requirement. After reading through the relevant material, I learned that the canvas element's captureStream() API can return a live capture of the canvas, so let's take a look at how to use this API.

Key API: HTMLCanvasElement.captureStream()

Syntax:

MediaStream = canvas.captureStream(frameRate);

Parameters:

  • frameRate — the capture rate in frames per second (FPS)

    • Optional.
    • Omitted: a new frame is captured whenever the canvas changes.
    • Set to 0: only a single frame is captured automatically; further frames must be requested manually via the capture track's requestFrame().
    • Set to 25: a double-precision floating-point value capping the capture rate at 25 FPS.

Return value:

  • A MediaStream object
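As a minimal browser-only sketch of the three frameRate modes described above (the feature-detection helper is my own addition, since older browsers lack the API):

```javascript
// Returns true when the given object exposes a captureStream() method.
function supportsCaptureStream(canvasLike) {
  return typeof canvasLike?.captureStream === "function";
}

// Browser-only demonstration of the three frameRate modes:
if (typeof document !== "undefined") {
  const canvas = document.createElement("canvas");
  if (supportsCaptureStream(canvas)) {
    const auto = canvas.captureStream();    // new frame whenever the canvas changes
    const fixed = canvas.captureStream(25); // capped at 25 FPS
    const manual = canvas.captureStream(0); // frames only on explicit request
    // With frameRate 0, push a frame by hand via the capture track:
    manual.getVideoTracks()[0].requestFrame();
  }
}
```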

Compatibility:

(browser compatibility table image not reproduced)
Notes:

  • In Firefox 41 and Firefox 42 the feature has to be enabled manually by setting canvas.capturestream.enabled to true.

  • For full details, refer to MDN. I have also compiled the front-end reference sites I use most, available via IT200.CN; the static page stores no personal information.

Demo

The code below is a React version, with reference to the book "WebRTC音视频开发" (WebRTC Audio and Video Development).

Prepare the layout

  1. A canvas element to serve as the answer board.
  2. A video element to mirror what happens on the answer board in real time.
  3. A button to start mirroring the answer board and begin recording.
  4. A button to stop recording.

```jsx
<div className="container">
  <div>
    <p>Drawing area</p>
    <canvas ref={this.canvasRef}></canvas>
  </div>
  <div>
    <p>Video area</p>
    <video ref={this.videoRef} playsInline autoPlay></video>
  </div>
  <button onClick={this.startCaptureCanvas}>Start</button>
  <button onClick={this.stopRecord}>Stop</button>
</div>
```

A look at the flow:

(flowchart image not reproduced)

Implementation

  1. Initialize the answer board

Prepare the canvas: set its width and height, fill it with a background color, and set the pen width and color.

```jsx
initCanvas = () => {
  canvas = this.canvasRef.current;
  canvas.width = 500;
  canvas.height = 350;
  context = canvas.getContext("2d");
  context.fillStyle = "#ccc";
  context.fillRect(0, 0, canvas.width, canvas.height);
  context.lineWidth = 1;
  context.strokeStyle = "#000";
  canvas.addEventListener("mousedown", this.startAction);
  canvas.addEventListener("mouseup", this.endAction);
};
```

Draw lines that follow the pointer:

  1. Set the pen's starting point.
  2. Draw the path as the pen moves.
  3. Remove the move listener when the stroke ends.
```jsx
startAction = (event) => {
  context.beginPath();
  context.moveTo(event.offsetX, event.offsetY);
  context.stroke();
  canvas.addEventListener("mousemove", this.moveAction);
};
moveAction = (event) => {
  context.lineTo(event.offsetX, event.offsetY);
  context.stroke();
};
endAction = () => {
  canvas.removeEventListener("mousemove", this.moveAction);
};
```
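Answer boards in online-education apps are often used on tablets, so touch input may matter as much as the mouse. Below is a hedged sketch of wiring touch events into the same drawing logic; `bindTouch` and `toCanvasPoint` are my own helper names, not part of the article's code:

```javascript
// Convert a client-coordinate point into canvas-local coordinates,
// given the canvas's bounding rectangle.
function toCanvasPoint(clientX, clientY, rect) {
  return { x: clientX - rect.left, y: clientY - rect.top };
}

// Browser-only wiring (assumes `canvas` and `context` as in the article):
function bindTouch(canvas, context) {
  canvas.addEventListener("touchstart", (e) => {
    const t = e.touches[0];
    const p = toCanvasPoint(t.clientX, t.clientY, canvas.getBoundingClientRect());
    context.beginPath();
    context.moveTo(p.x, p.y);
    e.preventDefault(); // keep the page from scrolling while drawing
  });
  canvas.addEventListener("touchmove", (e) => {
    const t = e.touches[0];
    const p = toCanvasPoint(t.clientX, t.clientY, canvas.getBoundingClientRect());
    context.lineTo(p.x, p.y);
    context.stroke();
    e.preventDefault();
  });
}
```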
  2. Stream the canvas into the video element

```jsx
startCaptureCanvas = async (e) => {
  stream = canvas.captureStream(25);
  const video = this.videoRef.current;
  video.srcObject = stream;
  this.startRecord(stream);
};
```
  3. Start recording the answer board

  • The number passed to start() is the timeslice: the recorded media is delivered in chunks of roughly that many milliseconds, so no single chunk grows too large.
  • ondataavailable: saves each data chunk the recorder delivers.
```jsx
startRecord = (stream) => {
  recordedBlobs = [];
  mediaRecorder = new MediaRecorder(stream, {
    mimeType: "video/webm",
  });
  mediaRecorder.onstop = (event) => {
    console.log("Recording finished");
  };
  mediaRecorder.ondataavailable = (event) => {
    if (event.data && event.data.size > 0) {
      recordedBlobs.push(event.data);
    }
  };
  mediaRecorder.start(100);
};
```
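One caveat: "video/webm" is not supported by every browser's MediaRecorder (Safari, for instance, gained MediaRecorder later and favors MP4). A hedged sketch of picking the first supported container with MediaRecorder.isTypeSupported(); the candidate list and the `pickMimeType` helper are my own assumptions:

```javascript
// Return the first candidate the predicate accepts; "" lets the
// browser fall back to its default container.
function pickMimeType(candidates, isSupported) {
  for (const type of candidates) {
    if (isSupported(type)) return type;
  }
  return "";
}

// Browser usage (guarded so the snippet is inert outside a browser):
if (typeof MediaRecorder !== "undefined") {
  const mimeType = pickMimeType(
    ["video/webm;codecs=vp9", "video/webm", "video/mp4"],
    (t) => MediaRecorder.isTypeSupported(t)
  );
  // new MediaRecorder(stream, mimeType ? { mimeType } : undefined);
}
```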
  4. When recording stops, release the related objects and retrieve the video file

```jsx
stopRecord = () => {
  mediaRecorder.stop();
  stream.getTracks().forEach((track) => track.stop());
  stream = null;
  const blob = new Blob(recordedBlobs, { type: "video/webm" });
  const url = window.URL.createObjectURL(blob);
  const a = document.createElement("a");
  a.style.display = "none";
  a.href = url;
  a.download = "canvas.webm";
  document.body.appendChild(a);
  a.click();
  setTimeout(() => {
    document.body.removeChild(a);
    window.URL.revokeObjectURL(url);
  }, 100);
};
```
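Instead of forcing a download, the recorded chunks can also be replayed directly on the page, which fits the "playback" requirement from the introduction. A minimal sketch; the `playbackVideo` element is hypothetical:

```javascript
// Assemble the recorded chunks into a single Blob, as the recorder's
// ondataavailable handler collected them.
function assembleRecording(recordedBlobs) {
  return new Blob(recordedBlobs, { type: "video/webm" });
}

// Browser usage with a hypothetical <video> element for playback:
// playbackVideo.src = URL.createObjectURL(assembleRecording(recordedBlobs));
// playbackVideo.controls = true;
// playbackVideo.play();
```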
  5. Full code

```jsx
import React from "react";

let mediaRecorder;
let recordedBlobs;
let stream;
let canvas;
let context;

export default class RecordCanvas extends React.Component {
  constructor() {
    super();
    this.canvasRef = React.createRef();
    this.videoRef = React.createRef();
  }

  componentDidMount() {
    this.initCanvas();
  }

  initCanvas = () => {
    canvas = this.canvasRef.current;
    canvas.width = 500;
    canvas.height = 350;
    context = canvas.getContext("2d");
    context.fillStyle = "#ccc";
    context.fillRect(0, 0, canvas.width, canvas.height);
    context.lineWidth = 1;
    context.strokeStyle = "#000";
    canvas.addEventListener("mousedown", this.startAction);
    canvas.addEventListener("mouseup", this.endAction);
  };

  startAction = (event) => {
    context.beginPath();
    context.moveTo(event.offsetX, event.offsetY);
    context.stroke();
    canvas.addEventListener("mousemove", this.moveAction);
  };

  moveAction = (event) => {
    context.lineTo(event.offsetX, event.offsetY);
    context.stroke();
  };

  endAction = () => {
    canvas.removeEventListener("mousemove", this.moveAction);
  };

  startCaptureCanvas = async (e) => {
    stream = canvas.captureStream(25);
    const video = this.videoRef.current;
    video.srcObject = stream;
    this.startRecord(stream);
  };

  startRecord = (stream) => {
    recordedBlobs = [];
    mediaRecorder = new MediaRecorder(stream, {
      mimeType: "video/webm",
    });
    mediaRecorder.onstop = (event) => {
      console.log("Recording finished");
    };
    mediaRecorder.ondataavailable = (event) => {
      if (event.data && event.data.size > 0) {
        recordedBlobs.push(event.data);
      }
    };
    mediaRecorder.start(100);
  };

  stopRecord = () => {
    mediaRecorder.stop();
    stream.getTracks().forEach((track) => track.stop());
    stream = null;
    const blob = new Blob(recordedBlobs, { type: "video/webm" });
    const url = window.URL.createObjectURL(blob);
    const a = document.createElement("a");
    a.style.display = "none";
    a.href = url;
    a.download = "canvas.webm";
    document.body.appendChild(a);
    a.click();
    setTimeout(() => {
      document.body.removeChild(a);
      window.URL.revokeObjectURL(url);
    }, 100);
  };

  render() {
    return (
      <div className="container">
        <div>
          <p>Drawing area</p>
          <canvas ref={this.canvasRef}></canvas>
        </div>
        <div>
          <p>Video area</p>
          <video ref={this.videoRef} playsInline autoPlay></video>
        </div>
        <button onClick={this.startCaptureCanvas}>Start</button>
        <button onClick={this.stopRecord}>Stop</button>
      </div>
    );
  }
}
```

Result preview

(demo GIF not reproduced)

Taking it further

  • The API itself is simple to use; to display the board on other clients through a remote server, you still need a socket layer (e.g. WebSocket or WebRTC) on top.
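As one possible sketch of that idea (the endpoint and interval are illustrative assumptions; this pushes JPEG snapshots rather than the MediaStream itself, whereas true real-time video would more likely go over WebRTC):

```javascript
// Send one canvas snapshot over an open socket; returns whether it was sent.
function sendFrame(canvas, socket) {
  if (socket.readyState !== 1 /* WebSocket.OPEN */) return false;
  socket.send(canvas.toDataURL("image/jpeg", 0.6));
  return true;
}

// Push snapshots on a timer; call the returned function to stop.
function startBroadcast(canvas, socket, intervalMs = 200) {
  const timer = setInterval(() => sendFrame(canvas, socket), intervalMs);
  return () => clearInterval(timer);
}

// Browser usage with a hypothetical endpoint:
// const socket = new WebSocket("wss://example.invalid/board");
// const stop = startBroadcast(canvasRef.current, socket);
```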