Sometimes the JavaScript code on a web page has been run through a minifier, which strips the original line breaks and collapses the whole script onto a single line, making it impossible to debug. In that case we can use the code pretty-print feature built into Chrome or Firefox to make the code debuggable, as shown in the figure below:
Using a physical hard disk directly as a virtual machine disk with VirtualBox on Mac OS X
At present, VirtualBox can only use a physical hard disk if the disk file is created from the command line.
If it is a USB disk, find the disk's name under "About This Mac" -> "Overview" -> "System Report" -> "USB", for example "disk2".
Assuming VirtualBox is installed under "/Applications/VirtualBox.app/", and we want to create the file under "~/VirtualBox\ VMs/Ubuntu/", run the following commands:
$ diskutil umountDisk disk2
$ sudo chown `whoami` /dev/disk2
$ mkdir ~/VirtualBox\ VMs
$ mkdir ~/VirtualBox\ VMs/Ubuntu
$ sudo /Applications/VirtualBox.app/Contents/MacOS/VBoxManage internalcommands createrawvmdk -filename ~/VirtualBox\ VMs/Ubuntu/Ubuntu.vmdk -rawdisk /dev/disk2
$ sudo chown `whoami` ~/VirtualBox\ VMs/Ubuntu/Ubuntu.vmdk
/dev/disk2 refers to the second hard disk on the machine; every time a new disk is plugged in, a path of the form /dev/disk* appears.
Finally, create a new virtual machine and point it at the newly created disk.
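Before running createrawvmdk it is worth confirming that no volume on the target disk is still mounted, since VBoxManage cannot open a disk the OS holds mounted. A minimal pre-flight sketch; the disk identifier is an example:

```shell
#!/bin/sh
# Hypothetical pre-flight check before createrawvmdk: refuse to proceed
# while any volume on the target disk is still mounted.
disk=disk2   # example identifier; substitute the one from System Report

if mount | grep -q "/dev/${disk}"; then
    echo "unmount /dev/${disk} first: diskutil umountDisk ${disk}"
else
    echo "ok to run createrawvmdk on /dev/${disk}"
fi
```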
References
How do I install Mavericks onto external HD but from inside VirtualBox
Supporting the NTFS/EXT4 file systems on macOS Sierra/Catalina/Big Sur
1. Install `HomeBrew`
Install `HomeBrew` following the instructions in the post "让Mac也能拥有apt-get类似的功能——Brew" (giving the Mac an apt-get-like tool).
2. Install `osxfuse`/`ext4fuse`/`ntfs-3g`
$ brew install osxfuse
# On macOS Big Sur, make sure osxfuse is at least version 3.11.2; do not
# upgrade to the 4.x series for now, otherwise mounting may fail
$ brew reinstall osxfuse
$ sudo mkdir /usr/local/sbin
$ sudo chown -R `whoami` /usr/local/sbin
$ brew reinstall ntfs-3g
$ brew install ext2fuse
$ brew install ext4fuse
The uninstall command is:
$ sudo bash /Library/Filesystems/osxfuse.fs/Contents/Resources/uninstall_osxfuse.app/Contents/Resources/Scripts/uninstall_osxfuse.sh
3. Mount the disk device
If it is a USB disk, find the disk's name under "About This Mac" -> "Overview" -> "System Report" -> "USB", for example "disk2".
This information can also be seen by running a command in the terminal:
$ diskutil list
/dev/disk0 (internal):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                         500.3 GB   disk0
   1:                        EFI EFI                     314.6 MB   disk0s1
   2:                 Apple_APFS Container disk1         500.0 GB   disk0s2

/dev/disk1 (synthesized):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      APFS Container Scheme -                      +500.0 GB   disk1
                                 Physical Store disk0s2
   1:                APFS Volume Macintosh HD            473.8 GB   disk1s1
   2:                APFS Volume Preboot                 49.2 MB    disk1s2
   3:                APFS Volume Recovery                509.9 MB   disk1s3
   4:                APFS Volume VM                      3.2 GB     disk1s4

/dev/disk2 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *500.1 GB   disk2
   1:                      Linux                         500.1 GB   disk2s1
If the disk is known to be in "EXT4" format, use the following commands:
# read-only mount
$ sudo ext4fuse /dev/disk2s2 ~/Desktop/disk2s2
# read-write mount
$ sudo ext4fuse /dev/disk2s2 ~/Desktop/disk2s2 -o rw
If the disk is known to be in "NTFS" format, use the following commands:
# first unmount the system's automatic mount
$ sudo diskutil unmount /dev/disk2
# read-write mount
$ sudo /usr/local/sbin/mount_ntfs /dev/disk2 ~/Desktop/disk2
References
Downloading the Android source code on Ubuntu 16.04
For well-known reasons, the Android source cannot be downloaded directly from inside China, so we have to use a domestic mirror.
1. Install the repo tool
$ sudo apt-get install repo
2. Create a directory wherever you want to store the code
$ mkdir ~/Android_Source
$ cd ~/Android_Source
# manually download the repo source from a domestic mirror, which fixes
# "fatal: Cannot get https://gerrit.googlesource.com/git-repo/clone.bundle"
$ git clone https://gerrit-googlesource.lug.ustc.edu.cn/git-repo .repo/repo
3. Download the Android source from a mirror
The omapzoom.org mirror
$ repo init -u git://git.omapzoom.org/platform/manifest
The Tsinghua University mirror
$ repo init -u https://aosp.tuna.tsinghua.edu.cn/platform/manifest
The commands above pull down the entire tree. To get the source of a specific release branch instead, specify the branch at init time; for example, for the Android 7.0.0_r21 branch, run:
$ repo init -u https://aosp.tuna.tsinghua.edu.cn/platform/manifest -b android-7.0.0_r21
4. Sync the code
$ repo sync -j4
5. List all branches
$ cd .repo/manifests && git branch -a | cut -d / -f 3
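The `cut -d / -f 3` in the command above extracts the third `/`-separated field from refs of the form `remotes/origin/<branch>`, leaving just the branch name. A quick illustration with simulated `git branch -a` output:

```shell
# cut splits on "/" and keeps field 3, i.e. the branch name
printf 'remotes/origin/android-7.0.0_r21\nremotes/origin/master\n' | cut -d / -f 3
# → android-7.0.0_r21
#   master
```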
6. Switch to a specific branch
$ repo start Android_7.0.0_r21 7.0.0_r21 --all
7. Show the current branch
$ repo branches
8. Delete an unused local branch
$ repo abandon Android_7.0.0_r21
References
Building TestDisk & PhotoRec 7.0/7.1 for WDMyCloud
1. Set up the WDMyCloud build environment following the instructions in How to successfully build packages for WD My Cloud from source
2. Download the TestDisk & PhotoRec 7.1 source code
$ wget https://www.cgsecurity.org/testdisk-7.1-WIP.tar.bz2
3. Unpack the source
$ tar -xjf testdisk-7.1-WIP.tar.bz2
4. Install the dependencies
$ apt-get install libncurses5-dev
$ apt-get install uuid-dev
5. Build the source
$ cd ~/wdmc-build/testdisk-7.1-WIP
$ chroot build
$ mount -t proc none /proc
$ mount -t devtmpfs none /dev
$ mount -t devpts none /dev/pts
$ export DEBIAN_FRONTEND=noninteractive
$ export DEBCONF_NONINTERACTIVE_SEEN=true
$ export LC_ALL=C
$ export LANGUAGE=C
$ export LANG=C
$ export DEB_CFLAGS_APPEND='-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE'
$ export DEB_BUILD_OPTIONS=nocheck
$ cd ~/testdisk-7.1-WIP
$ ./configure
$ make
The built binaries end up in the src directory.
The build above cannot produce an installation package. If you need a packaged version, download the already-adapted source from the Debian archive and build that instead; the adapted version is currently testdisk_7.0-2. Build it as follows:
$ cd ~/wdmc-build/64k-wheezy
$ chroot build
$ mount -t proc none /proc
$ mount -t devtmpfs none /dev
$ mount -t devpts none /dev/pts
$ export DEBIAN_FRONTEND=noninteractive
$ export DEBCONF_NONINTERACTIVE_SEEN=true
$ export LC_ALL=C
$ export LANGUAGE=C
$ export LANG=C
$ export DEB_CFLAGS_APPEND='-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE'
$ export DEB_BUILD_OPTIONS=nocheck
$ cd root
$ mkdir testdisk_7.0-2
$ cd testdisk_7.0-2
$ apt-get install ca-certificates
$ apt-get install dh-autoreconf
$ wget http://http.debian.net/debian/pool/main/t/testdisk/testdisk_7.0-2.dsc
$ wget http://http.debian.net/debian/pool/main/t/testdisk/testdisk_7.0.orig.tar.bz2
$ wget http://http.debian.net/debian/pool/main/t/testdisk/testdisk_7.0-2.debian.tar.xz
$ tar -jxvf testdisk_7.0.orig.tar.bz2
$ xz -d testdisk_7.0-2.debian.tar.xz
$ tar -xvf testdisk_7.0-2.debian.tar
$ mv debian/ testdisk-7.0/
$ rm -rf testdisk_7.0-2.debian.tar
$ cp testdisk_7.0-2.dsc testdisk-7.0/testdisk_7.0-2.dsc
$ cd testdisk-7.0
$ dpkg-buildpackage -d -b -uc
References
After upgrading Struts2: HTTP Status 500 - java.lang.ClassNotFoundException: org.apache.jsp.index_jsp and org.apache.jasper.JasperException: Unable to compile class for JSP
After upgrading Struts2 from version 2.3.20.1 to version 2.5.5, the following error may be reported:
HTTP Status 500 - Unable to compile class for JSP:

type Exception report

message Unable to compile class for JSP:

description The server encountered an internal error that prevented it from fulfilling this request.

exception

org.apache.jasper.JasperException: Unable to compile class for JSP:
An error occurred at line: [38] in the generated java file: [/var/lib/tomcat7/work/Catalina/localhost/Tools/org/apache/jsp/index_jsp.java]
The method getJspApplicationContext(ServletContext) is undefined for the type JspFactory

Stacktrace:
	org.apache.jasper.compiler.DefaultErrorHandler.javacError(DefaultErrorHandler.java:103)
	org.apache.jasper.compiler.ErrorDispatcher.javacError(ErrorDispatcher.java:366)
	org.apache.jasper.compiler.JDTCompiler.generateClass(JDTCompiler.java:468)
	org.apache.jasper.compiler.Compiler.compile(Compiler.java:378)
	org.apache.jasper.compiler.Compiler.compile(Compiler.java:353)
	org.apache.jasper.compiler.Compiler.compile(Compiler.java:340)
	org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:657)
	org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:357)
	org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:390)
	org.apache.jasper.servlet.JspServlet.service(JspServlet.java:334)
	javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
	org.apache.struts2.dispatcher.filter.StrutsPrepareAndExecuteFilter.doFilter(StrutsPrepareAndExecuteFilter.java:110)

note The full stack trace of the root cause is available in the Apache Tomcat/7.0.52 (Ubuntu) logs.
The following error message may also occur:
HTTP Status 500 - java.lang.ClassNotFoundException: org.apache.jsp.index_jsp
The details are shown in the figure below:
What is odd is that the application runs fine under Tomcat 8 but fails under Tomcat 7. The cause is that one of the imported jars bundles jsp-api.jar; simply exclude that file from the final war.
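To check whether a build is affected, look for the offending jar inside the exploded war. A small sketch, assuming a hypothetical directory layout:

```shell
#!/bin/sh
# Hypothetical check: flag a bundled jsp-api jar in an exploded war,
# since it shadows Tomcat 7's own JSP API classes.
war_dir=mywebapp   # example path to the exploded war

if ls "$war_dir"/WEB-INF/lib/jsp-api*.jar >/dev/null 2>&1; then
    echo "bundled jsp-api found: exclude it from the war"
else
    echo "no bundled jsp-api in WEB-INF/lib"
fi
```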
After upgrading Struts2 from 2.3.20.1 to 2.5.5: ClassNotFoundException: org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter
The project had been using Struts2 2.3.20.1 all along, the version IntelliJ Idea specifies by default when creating a new project. That version has known vulnerabilities and had to be upgraded, so we went straight to the latest 2.5.5, after which startup reported the following error:
SEVERE: Exception starting filter struts2
java.lang.ClassNotFoundException: org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter
	at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1892)
	at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1735)
	at org.apache.catalina.core.DefaultInstanceManager.loadClass(DefaultInstanceManager.java:504)
	at org.apache.catalina.core.DefaultInstanceManager.loadClassMaybePrivileged(DefaultInstanceManager.java:486)
	at org.apache.catalina.core.DefaultInstanceManager.newInstance(DefaultInstanceManager.java:113)
	at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258)
	at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:105)
	at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4958)
	at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5652)
	at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:145)
	at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:899)
	at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:875)
	at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:652)
	at org.apache.catalina.startup.HostConfig.manageApp(HostConfig.java:1863)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.tomcat.util.modeler.BaseModelMBean.invoke(BaseModelMBean.java:301)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
	at org.apache.catalina.mbeans.MBeanFactory.createStandardContext(MBeanFactory.java:618)
	at org.apache.catalina.mbeans.MBeanFactory.createStandardContext(MBeanFactory.java:565)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.tomcat.util.modeler.BaseModelMBean.invoke(BaseModelMBean.java:301)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
	at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
	at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
	at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
	at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
	at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
	at sun.rmi.transport.Transport$2.run(Transport.java:202)
	at sun.rmi.transport.Transport$2.run(Transport.java:199)
	at java.security.AccessController.doPrivileged(Native Method)
	at sun.rmi.transport.Transport.serviceCall(Transport.java:198)
	at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:567)
	at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:828)
	at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.access$400(TCPTransport.java:619)
	at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(TCPTransport.java:684)
	at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(TCPTransport.java:681)
	at java.security.AccessController.doPrivileged(Native Method)
	at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:681)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Inspecting the Struts2 2.5.5 source reveals that
org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter
has been moved to a different package and is now
org.apache.struts2.dispatcher.filter.StrutsPrepareAndExecuteFilter
Updating the filter class name accordingly fixes the error.
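Since only the package name changed, the fix is a one-line edit to the filter class registered in web.xml. A sketch using sed; the web.xml path is an example, point it at your project's deployment descriptor:

```shell
# Drop the obsolete ".ng" segment from the filter class in web.xml.
# The path below is an example.
sed -i 's/org\.apache\.struts2\.dispatcher\.ng\.filter\.StrutsPrepareAndExecuteFilter/org.apache.struts2.dispatcher.filter.StrutsPrepareAndExecuteFilter/' src/main/webapp/WEB-INF/web.xml
```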
Creating an IntelliJ IDEA desktop shortcut on Ubuntu 16.04
Create a file, either in the installation directory or on the desktop, named: idea.desktop
Edit the file with vim
$ vim idea.desktop
with the following content:
[Desktop Entry]
Name=IntelliJ IDEA
Comment=IntelliJ IDEA
Exec=/home/longsky/Application/idea-IU-163.7743.44/bin/idea.sh
Icon=/home/longsky/Application/idea-IU-163.7743.44/bin/idea.png
Terminal=false
Type=Application
Categories=Developer;
Then make the file executable
$ chmod +x idea.desktop
From then on, double-clicking the icon launches IntelliJ IDEA directly.
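The steps above can also be scripted. A sketch, assuming the same example paths; the final grep is only a quick sanity check that the two mandatory keys are present:

```shell
#!/bin/sh
# Write the desktop entry and mark it executable, as described above.
app=idea.desktop
cat > "$app" <<'EOF'
[Desktop Entry]
Name=IntelliJ IDEA
Comment=IntelliJ IDEA
Exec=/home/longsky/Application/idea-IU-163.7743.44/bin/idea.sh
Icon=/home/longsky/Application/idea-IU-163.7743.44/bin/idea.png
Terminal=false
Type=Application
Categories=Developer;
EOF
chmod +x "$app"

# Quick sanity check for the required Type= and Exec= keys
grep -q '^Type=Application' "$app" && grep -q '^Exec=' "$app" && echo "entry looks valid"
```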
How to access Wikipedia from inside China
Wikipedia is currently blocked, but it still holds plenty of useful material. The best option found so far is to download an offline snapshot of Wikipedia in zim format.
First, visit the page of the free open-source reader Kiwix at http://wiki.kiwix.org/wiki/Main_Page/zh-cn and download the reader application there; Kiwix-Windows and Kiwix-Mac can also be downloaded from this site.
Next, download the zim-format Wikipedia snapshot for the language you need; the Chinese file is currently around 10GB. It can be downloaded directly from http://wiki.kiwix.org/wiki/Main_Page/zh-cn, or from https://dumps.wikimedia.org/ under the "Kiwix files" section. Note that snapshots exist for almost every language; just pick the one you need.
References
Fixing file download timeouts in Apache Archiva
Note: Apache Archiva has been unmaintained since February 2024; JFrog Artifactory is the recommended replacement.
I recently set up my own Apache Archiva instance as a Maven repository proxy, and downloads kept failing. The Archiva log (logs/archiva.log) showed the following:
2016-11-22 19:52:02,773 [ajp-bio-127.0.0.1-8009-exec-74] WARN org.apache.archiva.proxy.DefaultRepositoryProxyConnectors [] - Transfer error from repository central for artifact org.mockito:mockito-core:2.2.22::jar , continuing to next repository. Error message: Download failure on resource [https://repo.maven.apache.org/maven2/org/mockito/mockito-core/2.2.22/mockito-core-2.2.22.jar]:GET request of: org/mockito/mockito-core/2.2.22/mockito-core-2.2.22.jar from central failed (cause: java.net.SocketTimeoutException: Read timed out)
2016-11-22 19:52:02,773 [ajp-bio-127.0.0.1-8009-exec-74] ERROR org.apache.archiva.webdav.ArchivaDavResourceFactory [] - Failures occurred downloading from some remote repositories: central: Download failure on resource [https://repo.maven.apache.org/maven2/org/mockito/mockito-core/2.2.22/mockito-core-2.2.22.jar]:GET request of: org/mockito/mockito-core/2.2.22/mockito-core-2.2.22.jar from central failed (cause: java.net.SocketTimeoutException: Read timed out)
org.apache.archiva.policies.ProxyDownloadException: Failures occurred downloading from some remote repositories: central: Download failure on resource [https://repo.maven.apache.org/maven2/org/mockito/mockito-core/2.2.22/mockito-core-2.2.22.jar]:GET request of: org/mockito/mockito-core/2.2.22/mockito-core-2.2.22.jar from central failed (cause: java.net.SocketTimeoutException: Read timed out)
	at org.apache.archiva.proxy.DefaultRepositoryProxyConnectors.fetchFromProxies(DefaultRepositoryProxyConnectors.java:366) ~[archiva-proxy-2.2.1.jar:?]
	at org.apache.archiva.webdav.ArchivaDavResourceFactory.fetchContentFromProxies(ArchivaDavResourceFactory.java:820) [archiva-webdav-2.2.1.jar:?]
	at org.apache.archiva.webdav.ArchivaDavResourceFactory.processRepository(ArchivaDavResourceFactory.java:629) [archiva-webdav-2.2.1.jar:?]
	at org.apache.archiva.webdav.ArchivaDavResourceFactory.createResource(ArchivaDavResourceFactory.java:325) [archiva-webdav-2.2.1.jar:?]
	at org.apache.archiva.webdav.RepositoryServlet.service(RepositoryServlet.java:126) [archiva-webdav-2.2.1.jar:?]
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:727) [servlet-api-3.0.jar:?]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303) [tomcat-catalina-7.0.52.jar:7.0.52]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208) [tomcat-catalina-7.0.52.jar:7.0.52]
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220) [tomcat-catalina-7.0.52.jar:7.0.52]
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122) [tomcat-catalina-7.0.52.jar:7.0.52]
	at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:501) [tomcat-catalina-7.0.52.jar:7.0.52]
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170) [tomcat-catalina-7.0.52.jar:7.0.52]
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98) [tomcat-catalina-7.0.52.jar:7.0.52]
	at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950) [tomcat-catalina-7.0.52.jar:7.0.52]
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116) [tomcat-catalina-7.0.52.jar:7.0.52]
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408) [tomcat-catalina-7.0.52.jar:7.0.52]
	at org.apache.coyote.ajp.AjpProcessor.process(AjpProcessor.java:193) [tomcat-coyote-7.0.52.jar:7.0.52]
	at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:607) [tomcat-coyote-7.0.52.jar:7.0.52]
	at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:313) [tomcat-coyote-7.0.52.jar:7.0.52]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_111]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_111]
	at java.lang.Thread.run(Thread.java:745) [?:1.7.0_111]
The errors clearly occur while downloading from https://repo.maven.apache.org/maven2, which is the default upstream repository in Apache Archiva. From my testing, this address is frequently unreliable when accessed from inside China, while the central repository at https://repo1.maven.org/maven2 is comparatively stable for domestic users. So simply add that central repository address under Remote Repositories.
Configure it as shown in the figure below:
Additionally, after adding it, raise the Download Timeout in the repository properties from the default 60 seconds to 600 seconds to reduce the chance of timeouts.
The steps above only partially solve the problem. In practice downloads still fail, mostly on the index file https://repo1.maven.org/maven2/.index/nexus-maven-repository-index.gz, which is roughly 300-400MB; a complete download almost never succeeds in one go, and worse, Apache Archiva does essentially no error handling when processing this file. So we either patch the source code to fix it, or help Apache Archiva complete this download out of band.
Below, we use a Linux cron job, nginx, and aria2 to assist Apache Archiva with this download.
1. First, install the required software
$ sudo apt-get install nginx
$ sudo apt-get install aria2
2. Next, configure nginx
$ sudo vim /etc/nginx/sites-enabled/default
The original configuration file reads:
##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# http://wiki.nginx.org/Pitfalls
# http://wiki.nginx.org/QuickStart
# http://wiki.nginx.org/Configuration
#
# Generally, you will want to move this file somewhere, and start with a clean
# file but keep this around for reference. Or just disable in sites-enabled.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##

# Default server configuration
#
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    include snippets/fastcgi-php.conf;
    #
    #    # With php7.0-cgi alone:
    #    fastcgi_pass 127.0.0.1:9000;
    #    # With php7.0-fpm:
    #    fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}

# Virtual Host configuration for example.com
#
# You can move that to a different file under sites-available/ and symlink that
# to sites-enabled/ to enable it.
#
#server {
#    listen 80;
#    listen [::]:80;
#
#    server_name example.com;
#
#    root /var/www/example.com;
#    index index.html;
#
#    location / {
#        try_files $uri $uri/ =404;
#    }
#}
After modification it looks like this:
##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# http://wiki.nginx.org/Pitfalls
# http://wiki.nginx.org/QuickStart
# http://wiki.nginx.org/Configuration
#
# Generally, you will want to move this file somewhere, and start with a clean
# file but keep this around for reference. Or just disable in sites-enabled.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##

# Default server configuration
#
server {
    #listen 80 default_server;
    #listen [::]:80 default_server;
    listen 127.0.0.1:8090 default_server;

    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    root /data/nginx/maven_index;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;

        resolver 114.114.114.114 218.85.152.99;
        resolver_timeout 30s;

        # nginx supports neither nested "if" nor multi-condition tests,
        # so we emulate the AND below with a flag string.
        # Mind the spaces around the parentheses: a missing space is a syntax error.
        #if ( ( $host ~* "repo1\.maven\.org" ) && ( $request_uri ~* "maven2/\.index" ) ) {
        set $flag 0;
        if ( $host ~* "repo1\.maven\.org" ) {
            set $flag "${flag}1";
        }
        if ( $request_uri ~* "maven2/\.index" ) {
            set $flag "${flag}2";
        }
        if ($flag = "012") {
            proxy_pass http://127.0.0.1:8090$request_uri;
        }

        # avoid an infinite redirect loop
        if ( $host != "127.0.0.1" ) {
            proxy_pass http://$host$request_uri;
        }
    }

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    include snippets/fastcgi-php.conf;
    #
    #    # With php7.0-cgi alone:
    #    fastcgi_pass 127.0.0.1:9000;
    #    # With php7.0-fpm:
    #    fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}

# Virtual Host configuration for example.com
#
# You can move that to a different file under sites-available/ and symlink that
# to sites-enabled/ to enable it.
#
#server {
#    listen 80;
#    listen [::]:80;
#
#    server_name example.com;
#
#    root /var/www/example.com;
#    index index.html;
#
#    location / {
#        try_files $uri $uri/ =404;
#    }
#}
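The `$flag` trick in the config above works around nginx's lack of nested `if` and multi-condition tests: each condition appends its own digit, and only when both matched does the flag equal "012". The same logic, sketched in shell for clarity (the sample host and URI are examples):

```shell
#!/bin/sh
# Emulate the nginx $flag AND-trick: append one digit per matched condition.
host="repo1.maven.org"
request_uri="/maven2/.index/nexus-maven-repository-index.gz"

flag=0
case "$host" in *repo1.maven.org*) flag="${flag}1" ;; esac
case "$request_uri" in *maven2/.index*) flag="${flag}2" ;; esac

if [ "$flag" = "012" ]; then
    echo "serve from local nginx cache"   # both conditions matched
else
    echo "proxy to the original host"
fi
# → serve from local nginx cache
```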
Then restart the nginx service.
3. Set up a cron job that periodically checks whether the data on the remote server has been updated
The task script is as follows:
nginx_dir="/data/nginx"
mvn_idx_dir="$nginx_dir/maven2/.index"
mvn_idx_dl_dir="$nginx_dir/maven2/.CacheIndex"
mvn_idx_name="nexus-maven-repository-index"
mvn_idx_path_name="$mvn_idx_dir/$mvn_idx_name"
mvn_idx_gz_name="$mvn_idx_dir/$mvn_idx_name.gz"
mvn_idx_prop_name="$mvn_idx_dir/$mvn_idx_name.properties"
mvn_idx_dl_gz_name="$mvn_idx_dl_dir/$mvn_idx_name.gz"
mvn_idx_dl_prop_name="$mvn_idx_dl_dir/$mvn_idx_name.properties"
log_file="/data/nginx/maven_index_log.txt"
dt_fmt="`date "+%Y-%m-%d %H:%M:%S"`"

if [ ! -d $mvn_idx_dl_dir ]; then
    mkdir -p $mvn_idx_dl_dir
fi
if [ ! -d $mvn_idx_dir ]; then
    mkdir -p $mvn_idx_dir
fi
if [ ! -d $mvn_idx_dl_dir ]; then
    echo "$dt_fmt mkdir $mvn_idx_dl_dir failed !" >> $log_file
    exit -1
fi
if [ ! -d $mvn_idx_dir ]; then
    echo "$dt_fmt mkdir $mvn_idx_dir failed !" >> $log_file
    exit -1
fi

cd $mvn_idx_dl_dir
dl_idx="true"
dl_prop="true"
#echo mvn_idx_gz_name="$mvn_idx_gz_name"

# check whether the files need to be downloaded at all before downloading,
# to avoid unnecessary network traffic
if [ -f "$mvn_idx_gz_name" ]; then
    remote_idx_md5=$(curl -s https://repo1.maven.org/maven2/.index/nexus-maven-repository-index.gz.md5)
    if [ "${#remote_idx_md5}" != "32" ] ; then
        echo "$dt_fmt download $mvn_idx_gz_name.md5 failed !" >> $log_file
        exit -1
    fi
    local_idx_md5=$(md5sum $mvn_idx_gz_name | cut -b 1-32)
    #echo remote_idx_md5="$remote_idx_md5" local_idx_md5="$local_idx_md5"
    if [ "$remote_idx_md5" = "$local_idx_md5" ] ; then
        dl_idx="false"
        if [ -f "$mvn_idx_prop_name" ]; then
            remote_prop_md5=$(curl -s https://repo1.maven.org/maven2/.index/nexus-maven-repository-index.properties.md5)
            if [ "${#remote_prop_md5}" != "32" ] ; then
                echo "$dt_fmt download $mvn_idx_prop_name failed !" >> $log_file
                exit -1
            fi
            local_prop_md5=$(md5sum $mvn_idx_prop_name | cut -b 1-32)
            if [ "$remote_prop_md5" = "$local_prop_md5" ] ; then
                dl_prop="false"
                echo "$dt_fmt check file success ,no need to update !" >> $log_file
                exit 0
            fi
        fi
    fi
fi

if [ "$dl_idx" = "true" ] ; then
    aria2c -c https://repo1.maven.org/maven2/.index/nexus-maven-repository-index.gz
    aria2c -c https://repo1.maven.org/maven2/.index/nexus-maven-repository-index.gz.md5
    aria2c -c https://repo1.maven.org/maven2/.index/nexus-maven-repository-index.gz.sha1
fi
if [ "$dl_prop" = "true" ] ; then
    aria2c -c https://repo1.maven.org/maven2/.index/nexus-maven-repository-index.properties
    aria2c -c https://repo1.maven.org/maven2/.index/nexus-maven-repository-index.properties.md5
    aria2c -c https://repo1.maven.org/maven2/.index/nexus-maven-repository-index.properties.sha1
fi

# verify the downloaded files
if [ "$dl_idx" = "true" ] ; then
    dl_idx_md5=$(md5sum $mvn_idx_dl_gz_name | cut -b 1-32)
    dl_idx_f_md5=$(cat $mvn_idx_dl_gz_name.md5)
    if [ "$dl_idx_md5" = "$dl_idx_f_md5" ] ; then
        rm -rf $mvn_idx_gz_name
        mv $mvn_idx_dl_gz_name $mvn_idx_gz_name
        rm -rf $mvn_idx_gz_name.md5
        mv $mvn_idx_dl_gz_name.md5 $mvn_idx_gz_name.md5
        rm -rf $mvn_idx_gz_name.sha1
        mv $mvn_idx_dl_gz_name.sha1 $mvn_idx_gz_name.sha1
    else
        echo "$dt_fmt check downloaded index file failed !" >> $log_file
        cd $nginx_dir
        rm -rf $mvn_idx_dl_dir
    fi
fi
if [ "$dl_prop" = "true" ] ; then
    dl_prop_md5=$(md5sum $mvn_idx_dl_prop_name | cut -b 1-32)
    dl_prop_f_md5=$(cat $mvn_idx_dl_prop_name.md5)
    if [ "$dl_prop_md5" = "$dl_prop_f_md5" ] ; then
        rm -rf $mvn_idx_prop_name
        mv $mvn_idx_dl_prop_name $mvn_idx_prop_name
        rm -rf $mvn_idx_prop_name.md5
        mv $mvn_idx_dl_prop_name.md5 $mvn_idx_prop_name.md5
        rm -rf $mvn_idx_prop_name.sha1
        mv $mvn_idx_dl_prop_name.sha1 $mvn_idx_prop_name.sha1
    else
        echo "$dt_fmt check downloaded properties file failed !" >> $log_file
        cd $nginx_dir
        rm -rf $mvn_idx_dl_dir
    fi
fi
We assume the script is saved at /data/nginx/mvn_index_corn.sh.
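The core of the script above is a verify-then-swap pattern: download into a staging directory, compare the file's md5sum against the published .md5, and only then replace the copy nginx serves. A self-contained sketch of that pattern, using local temp files in place of the aria2 downloads:

```shell
#!/bin/sh
# Verify-then-swap, demonstrated with local files instead of real downloads.
staging=$(mktemp -d)   # stand-in for the .CacheIndex download dir
serve=$(mktemp -d)     # stand-in for the directory nginx serves

# pretend aria2c just fetched the index and its .md5 into staging
printf 'index-data' > "$staging/index.gz"
md5sum "$staging/index.gz" | cut -b 1-32 > "$staging/index.gz.md5"

dl_md5=$(md5sum "$staging/index.gz" | cut -b 1-32)
expected_md5=$(cat "$staging/index.gz.md5")

if [ "$dl_md5" = "$expected_md5" ]; then
    mv "$staging/index.gz" "$serve/index.gz"   # publish only verified files
    echo "index verified and published"
else
    rm -rf "$staging"                          # discard the corrupt download
    echo "checksum mismatch, download discarded"
fi
```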
The script that installs the cron job is as follows:
chmod +x /data/nginx/mvn_index_corn.sh
# write out the current crontab
crontab -l > addcron
# append the new entry: run every 30 minutes; the flock prefix is a file
# lock that prevents concurrent runs
echo "*/30 * * * * flock -x -w 10 /dev/shm/mvn_index_corn.lock -c \"sh /data/nginx/mvn_index_corn.sh\"" >> addcron
# install the new cron file
crontab addcron
rm addcron
Run the script above.
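The `flock -x -w 10` prefix in the crontab entry ensures a run that is still downloading is not trampled by the next scheduled run. A hypothetical demonstration of that locking behavior; the lock path is an example:

```shell
#!/bin/sh
# While one process holds the lock, a non-blocking attempt (-n) fails
# instead of starting a second concurrent download.
lock=/tmp/mvn_index_demo.lock

flock -x "$lock" -c 'sleep 2' &   # first "cron run" holds the lock
sleep 0.5

if flock -n "$lock" -c 'true'; then
    echo "lock was free"
else
    echo "lock held, skipping this run"
fi
wait
```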
4. Configure the proxy server settings in Apache Archiva