macOS Mojave (10.14.5): Accelerating SVM Computation in Octave with libSVM


After installation, the library provides functions such as svmtrain, svmpredict, libsvmwrite, and libsvmread, which speed up the whole SVM workflow.

They are used from Octave as follows:

References


Multiply-Accumulate Operation (Multiply Accumulate, MAC)

The multiply-accumulate (MAC) operation is a special operation found in digital signal processors and some microprocessors. The hardware unit that implements it is called a "multiplier-accumulator". The operation multiplies two operands and adds the product to the value of the accumulator a, storing the result back into the accumulator:

a ← a + (b × c)

Without a MAC instruction, the operation above would need two instructions, whereas a single MAC instruction completes it. Many computations (for example convolutions, dot products, matrix operations, digital filters, and even polynomial evaluation) can be decomposed into a series of MAC steps, so the instruction improves the efficiency of all of them.
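As a concrete illustration, a dot product reduces to one MAC step per element: the running sum plays the role of the accumulator a, and each iteration performs a ← a + b[i] × c[i]. A minimal C sketch (the function name dot_mac is just for this example):

    #include <stddef.h>

    /* Dot product written as repeated multiply-accumulate steps:
     * every iteration performs a <- a + b[i] * c[i].
     * On FMA-capable hardware a compiler can map the loop body to a
     * single fused instruction per element. */
    static double dot_mac(const double *b, const double *c, size_t n)
    {
        double a = 0.0;                  /* the accumulator */
        for (size_t i = 0; i < n; ++i)
            a += b[i] * c[i];            /* one MAC per element */
        return a;
    }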

The inputs and outputs of a MAC instruction may be integers, fixed-point numbers, or floating-point numbers. When floating-point values are processed there are normally two rounding steps, which is common on typical DSPs. A MAC instruction that rounds only once when processing floating-point values is called a fused multiply-add (FMA) or fused multiply-accumulate (FMAC) instruction.

Fused Multiply-Add

A fused multiply-add performs essentially the same operation as a multiply-accumulate, and for floating-point operands it also completes in a single instruction. The difference lies in the rounding: a non-fused multiply-accumulate first computes the product b × c, rounds it to N bits, adds the rounded product to the value in register a, and then rounds the sum to N bits again; a fused multiply-add computes the full expression a + b × c first and rounds the exact result to N bits only once. Because it removes one rounding step, the fused operation improves both the accuracy of the result and the efficiency and speed of the computation.
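A small C sketch of the single-rounding effect, using the standard fma() from <math.h>. The operand values are chosen here purely to make the lost bit visible (they are not from the original text); compile with something like gcc -O2 -ffp-contract=off demo.c -lm so the compiler does not silently fuse the "unfused" line as well:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* b*c = 1 + 2^-26 + 2^-54 exactly; the trailing 2^-54 does not fit
         * in a double, so the separate multiply rounds it away. */
        double b = 1.0 + 0x1p-27, c = 1.0 + 0x1p-27, a = -1.0;

        double unfused = b * c + a;      /* two roundings: product, then sum  */
        double fused   = fma(b, c, a);   /* one rounding of the exact a + b*c */

        printf("unfused: %.20e\n", unfused);   /* 2^-26          */
        printf("fused:   %.20e\n", fused);     /* 2^-26 + 2^-54  */
        return 0;
    }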

Fused multiply-add can significantly improve the performance and accuracy of operations such as the dot products, matrix operations, convolutions, and polynomial evaluations mentioned above.

Fused multiply-add is usually relied on to obtain more accurate results. However, Kahan has pointed out that using it indiscriminately can cause problems in some situations. [1] Take the difference of squares x² − y², which is equivalent to (x × x) − (y × y): when x and y are known values, evaluating it with a fused multiply-add can give a wrong result even when x = y, because one product is rounded (its low-order significant bits are discarded) while the product inside the fused operation is kept exact. In a multi-step computation, an error in the first step contaminates everything that follows: if, for example, the square root of the difference of squares is taken next, that result will be wrong as well.
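A C sketch of this pitfall. The value of x is picked here (it is not from the article) so that y × y rounds upward past the exact x², which makes the fused result slightly negative and the following square root NaN; again compile with -ffp-contract=off and link with -lm:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* x == y, so x*x - y*y should be exactly 0. */
        double x = 1.0 + 3.0 * 0x1p-28, y = x;

        double yy      = y * y;             /* rounded product                 */
        double unfused = x * x - yy;        /* both products round alike -> 0  */
        double fused   = fma(x, x, -yy);    /* exact x*x minus rounded y*y     */

        printf("unfused: % .17e  sqrt = %g\n", unfused, sqrt(unfused));
        printf("fused:   % .17e  sqrt = %g\n", fused,   sqrt(fused));
        /* fused comes out around -9.7e-17, so sqrt(fused) is NaN. */
        return 0;
    }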

References

Simple ARM NEON optimized sin, cos, log and exp

This is the sequel to the single precision SSE optimized sin, cos, log and exp that I wrote some time ago, adapted to the NEON FPU of my Pandaboard. Precision and range are exactly the same as in the SSE version, so I won't repeat them.

The code

The functions below are licensed under the zlib license, so you can do basically what you want with them.

  • neon_mathfun.h source code for sin_ps, cos_ps, sincos_ps, exp_ps, log_ps, as straight C (a short usage sketch follows this list).
  • neon_mathfun_test.c Validation+Bench program for those functions. Do not forget to run it once.
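Here is a minimal usage sketch (not part of the original post). It assumes the NEON functions take and return a float32x4_t, mirroring the v4sf interface of the SSE version, and it builds with roughly the same flags as the benchmark command line below:

    #include <stdio.h>
    #include <math.h>
    #include <arm_neon.h>
    #include "neon_mathfun.h"   /* sin_ps, cos_ps, sincos_ps, exp_ps, log_ps */

    int main(void)
    {
        float in[4] = { 0.1f, 0.5f, 1.0f, 2.0f };
        float out[4];

        float32x4_t x = vld1q_f32(in);   /* load 4 floats into a NEON register */
        float32x4_t y = exp_ps(x);       /* 4 exponentials in one call         */
        vst1q_f32(out, y);               /* store the results back             */

        for (int i = 0; i < 4; ++i)
            printf("exp_ps(%g) = %g   (libm expf: %g)\n",
                   in[i], out[i], expf(in[i]));
        return 0;
    }

For example: gcc -O3 -mfloat-abi=softfp -mfpu=neon -march=armv7-a neon_demo.c -lm (the source file name is arbitrary).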

Performance

Results on a pandaboard with a 1GHz dual-core ARM Cortex A9 (OMAP4), using gcc 4.6.1

command line: gcc -O3 -mfloat-abi=softfp -mfpu=neon -march=armv7-a -mtune=cortex-a9 -Wall -W neon_mathfun_test.c -lm

So performance is not stellar. I recommend using gcc 4.6.1 or newer, as it generates much better code than previous versions (gcc 4.5) -- almost 20% faster here. I believe rewriting these functions in assembly would improve performance by 30%, and it should not be very hard, as ARM and NEON assembly is quite nice and easy to write -- maybe I'll do it. Computing two SIMD vectors at once would also improve performance a lot, since there are enough registers on NEON and it would reduce the dependencies between NEON instructions.

Note also that I have no idea of the performance on a Cortex A8 -- it may be extremely bad, I don't know.

Comparison with an Intel Atom

For comparison purposes, here is the performance of the SSE version on a single core Intel Atom N270 running at 1.66GHz

command line: cl.exe /arch:SSE /O2 /TP /MD sse_mathfun_test.c (this is msvc 2010)

The number of cycles is quite similar -- but the Atom has a higher clock.

Last modified: 2011/05/29

References



TI Designs: TIDA-01469 Motor Monitoring Reference Design with Wireless Vibration Sensors Supporting Preventive Maintenance
