Creating and deleting pods in k8s

Create

Create a pod from a YAML configuration file:

kubectl apply -f xxx.yaml

Delete

Delete using the same YAML configuration file:

kubectl delete -f xxx.yaml

Delete a specific pod:

kubectl delete pod [pod-name] -n [namespace]

If the pod stays in the Terminating state and the delete hangs, add --force --grace-period=0 to force-delete it:

kubectl delete pod [pod名称] -n [namespace名称] --force --grace-period=0

keepalived fails to start with "Cant find interface xxx for vrrp_instance VI_1"

Check the service status with systemctl status keepalived:

This is usually caused by the network interface named in the keepalived configuration not existing on the machine. When following online tutorials, remember to change the interface name to match your actual NIC; if you don't know what your interface is called, check with the "ip address" command:

Edit the keepalived configuration file /etc/keepalived/keepalived.conf.
Then restart keepalived with:
systemctl restart keepalived
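For reference, a minimal sketch of the relevant vrrp_instance block in /etc/keepalived/keepalived.conf; the interface name, router id and address below are placeholders, so substitute the values from your own machine (as shown by ip address):

```conf
vrrp_instance VI_1 {
    state MASTER
    interface ens33            # must be an interface that actually exists on this host
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.1.100          # the VIP shared between nodes
    }
}
```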

k8s init reports "Initial timeout of 40s passed"

First check the kubelet startup logs with:

systemctl status kubelet

Or view the recent kubelet logs with:

journalctl -xeu kubelet

Possibility 1:

The pause container still uses k8s.gcr.io/pause:3.6, an image that fails to download from servers in mainland China.

So before initializing, change the sandbox image setting in /etc/containerd/config.toml:

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.aliyuncs.com/k8sxio/pause:3.6"

Then restart containerd:

systemctl restart containerd

Possibility 2:

Check the tail of /var/log/messages:

May  7 14:34:26 k8s-master01 kubelet: E0507 14:34:26.071077   16320 eviction_manager.go:254] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"k8s-master01\" not found"
May  7 14:34:26 k8s-master01 kubelet: E0507 14:34:26.096124   16320 kubelet.go:2394] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May  7 14:34:26 k8s-master01 kubelet: E0507 14:34:26.160241   16320 kubelet.go:2469] "Error getting node" err="node \"k8s-master01\" not found"
May  7 14:34:26 k8s-master01 kubelet: E0507 14:34:26.262056   16320 kubelet.go:2469] "Error getting node" err="node \"k8s-master01\" not found"
May  7 14:34:26 k8s-master01 kubelet: E0507 14:34:26.362486   16320 kubelet.go:2469] "Error getting node" err="node \"k8s-master01\" not found"
May  7 14:34:26 k8s-master01 kubelet: E0507 14:34:26.463165   16320 kubelet.go:2469] "Error getting node" err="node \"k8s-master01\" not found"

This is because the flannel network plugin has not been installed yet; the errors stop once a CNI plugin such as flannel is applied.

Possibility 3: kubernetes and docker use different cgroup drivers

Check the tail of /var/log/messages:

Aug 23 18:17:21 localhost kubelet: E0823 18:17:21.059626    9490 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\""
Aug 23 18:17:21 localhost systemd: kubelet.service: main process exited, code=exited, status=1/FAILURE
Aug 23 18:17:21 localhost systemd: Unit kubelet.service entered failed state.
Aug 23 18:17:21 localhost systemd: kubelet.service failed.

As the log shows, kubernetes and docker are using different cgroup drivers, so we only need to set both to the same driver (systemd is officially recommended).

We can confirm docker's current driver with:

docker system info|grep -i driver

How to change docker's cgroup driver

1. Find docker's configuration file /etc/docker/daemon.json (create it if it doesn't exist) and add the following parameter:

"exec-opts": ["native.cgroupdriver=systemd"]
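For context, here is what a minimal /etc/docker/daemon.json could look like after the change; if the file already has other keys, just add the exec-opts entry alongside them:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```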

2. Restart docker and check again that its driver has changed:

systemctl restart docker
docker system info|grep -i driver

Changing kubernetes' cgroup driver to systemd

Two files are involved:

一、/etc/sysconfig/kubelet

Add the parameter --cgroup-driver=systemd:
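As a sketch, /etc/sysconfig/kubelet conventionally defines KUBELET_EXTRA_ARGS, so after the change it might look like this (the variable name is the file's convention, the flag value is what this fix requires):

```conf
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
```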

2. Edit the 10-kubeadm.conf file (locate it with: find / -name '10-kubeadm.conf')

Add the parameter --cgroup-driver=systemd:

Then restart the kubelet service:

systemctl restart kubelet

Finally, reset kubeadm and re-run the k8s init command:

kubeadm reset

Tracing SpringBoot microservices with Pinpoint 2.5.1

Preparation:

Install HBase in advance (pinpoint supports HBase 1.x at most, not 2.x, while 1.x in turn lacks the 127.0.0.1:16010 web UI; annoying)

Have JDK 11 ready (the main services require JDK 11, but in testing the agent also runs fine on JDK 8)

Create the hbase tables

Download pinpoint's hbase initialization script:

https://github.com/pinpoint-apm/pinpoint/blob/master/hbase/scripts/hbase-create.hbase

Then import it with the command hbase shell hbase-create.hbase:

Download pinpoint's three core services:

https://github.com/pinpoint-apm/pinpoint/releases/tag/v2.5.1

Run the pinpoint services

1. Run the data collector (defaults to port 8081)

java -jar -Dpinpoint.zookeeper.address=localhost pinpoint-collector-boot-2.5.1.jar

2. Run the web UI (defaults to port 8080)

java -jar -Dpinpoint.zookeeper.address=localhost pinpoint-web-boot-2.5.1.jar

Note: running these requires JDK 11+

Then visit port 8080 to verify the startup succeeded:

http://localhost:8080/main

Start a springboot project with pinpoint-agent

Add the following VM options to the springboot run configuration in IDEA:

-javaagent:D:/pinpoint/pinpoint-agent-2.5.1/pinpoint-bootstrap-2.5.1.jar -Dpinpoint.agentId=127.0.0.1 -Dpinpoint.applicationName=sway-doctor-service-auth

Start the springboot project; the following output indicates a successful start:

Then make a test request and open pinpoint's web UI, where the application should now appear:

Installing Hadoop 3.1.1 and HBase 2.2.6 on Win10

Prerequisite: the JDK is installed and the JAVA_HOME environment variable is configured

hadoop and hbase versions must be matched; don't pair them arbitrarily. Check the compatibility table here:

https://hbase.apache.org/book.html#java

Install hadoop

1. Download hadoop

https://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-3.1.1/hadoop-3.1.1.tar.gz

After downloading, extract it; taking drive D as the example, the hadoop path is:

D:\hadoop-3.1.1

2. Download the hadoop windows components

Download all the files under one of the following git repos and overwrite the files in hadoop's bin directory:

There are several versions; pick the one you need:

https://github.com/steveloughran/winutils/tree/master/hadoop-3.0.0/bin

https://github.com/selfgrowth/apache-hadoop-3.1.1-winutils

https://github.com/s911415/apache-hadoop-3.1.3-winutils

3. Configure the hadoop environment variables

Add HADOOP_HOME = D:\hadoop-3.1.1

Add the following two entries to Path:

1. %HADOOP_HOME%\bin

2. %HADOOP_HOME%\sbin

Test the configuration by running hadoop version in CMD:

4. Configure hadoop for standalone (pseudo-distributed) mode

1. Edit core-site.xml in the etc\hadoop directory:

<configuration>
	<property>
        <name>hadoop.tmp.dir</name>
        <value>/D:/hadoop-3.1.1/workspace/tmp</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/D:/hadoop-3.1.1/workspace/name</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

2. Edit hdfs-site.xml in the etc\hadoop directory:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/D:/hadoop-3.1.1/workspace/data</value>
    </property>
</configuration>

3. Edit mapred-site.xml in the etc\hadoop directory:

<configuration>
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>
	<property>
		   <name>mapred.job.tracker</name>
		   <value>hdfs://localhost:9001</value>
	</property>
</configuration>

4. Edit yarn-site.xml in the etc\hadoop directory:

<configuration>
   <property>
       <name>yarn.nodemanager.aux-services</name>
       <value>mapreduce_shuffle</value>
    </property>
    <property>
       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>

5. Format the namenode

Run this in a cmd window: hdfs namenode -format

6. Start and test

In Hadoop's sbin directory, run start-dfs.cmd to start Hadoop:

Use the JDK's jps tool to check that the hadoop processes started successfully:

Visit: http://localhost:9870

Visit: http://localhost:8088

Install HBase

1. Download hbase 2.2.6

https://hbase.apache.org/downloads.html

https://archive.apache.org/dist/hbase/

This guide extracts it to: D:\hbase-2.2.6\

2. Configure environment variables

HBASE_HOME: D:\hbase-2.2.6

HBASE_CLASSPATH: D:\hbase-2.2.6\conf

HBASE_BIN_PATH: D:\hbase-2.2.6\bin

Add to Path: %HBASE_HOME%\bin

3. Edit the configuration

(1) conf\hbase-env.cmd (on Windows these are set statements):

set JAVA_HOME=D:\Java\jdk1.8.0_351
set HBASE_MANAGES_ZK=true

(2)conf\hbase-site.xml

<configuration>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>false</value>
    </property>
	<property>
        <name>hbase.tmp.dir</name>
        <value>/D:/hbase-2.2.6/tmp</value>
    </property>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:9000/hbase</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>127.0.0.1</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/D:/hbase-2.2.6/zoo</value>
    </property>
    <property>
        <name>hbase.unsafe.stream.capability.enforce</name>
        <value>false</value>
    </property>
</configuration>

4. Start hbase

Run: bin\start-hbase.cmd

5. Open the web UI

http://127.0.0.1:16010/master-status

6. Enter the hbase shell

Run: hbase shell

Create a test table to check everything works:

create 'student','Sname','Ssex','Sage','Sdept','course'

Then check the web UI to confirm the table appears:
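As a further smoke test, you can write and read back a row from inside hbase shell (the row key and values below are arbitrary examples, and 'Sname' is one of the column families created above):

```
put 'student', '1', 'Sname', 'Zhang San'
scan 'student'
```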

That's it. Enjoy your big data development journey!

Pitfalls:

hadoop data path configured wrong

If, after running start-hbase, hadoop and hbase report "could only be written to 0 of the 1 minReplication nodes, there are 0 datanode(s) running", the data directory in the configuration files is usually wrong, and you can fix it as follows:

1. Stop all hadoop services with the stop-all script

2. Delete hadoop's logs directory and its data (the tmp and data directories from the configuration files)

3. Check the hdfs-site.xml configuration file

4. Re-run the HDFS format command: hdfs namenode -format

5. Start hadoop with start-all

6. Start hbase again with start-hbase

hbase node errors

If starting hbase yields the message "Master startup cannot progress, in holding-pattern until region onlined.":

A brute-force fix is to delete all the znodes: remove hbase's zoo directory and restart, or delete them with hbase zkcli -server localhost:2181 (untested)

hbase shell won't run

The hbase shell command fails with the following message:

This happens when the hbase version is too new; use a slightly older one. Users report that versions 1.4.9 and 2.2.6 are free of this problem.

hbase shell is missing a class

Running hbase shell reports: java.lang.NoClassDefFoundError: Could not initialize class org.fusesource.jansi.internal.Kernel32

Download the jar from https://mvnrepository.com/artifact/org.fusesource.jansi/jansi/1.4 and put it into hbase's lib directory.

hbase.unsafe.stream.capability.enforce not set

If you run into "Please check the config value of 'hbase.wal.dir' and ensure it points to a FileSystem mount that has suitable capabilities for output streams",

make sure hbase-site.xml contains the following configuration:

	<property>
		<name>hbase.unsafe.stream.capability.enforce</name>
		<value>false</value>
	</property>

Fixing the dubbo.cache is not exclusive error when running a Dubbo provider and consumer with SpringBoot

Dubbo's cache files live under {user.home}/.dubbo/ by default, where it generates files per provider and consumer. Running several dubbo projects on the same machine therefore triggers this error.

The fix is to add the following code to the SpringBoot main class, relocating dubbo's cache files next to the jar:

import java.io.File;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.system.ApplicationHome;

@SpringBootApplication
public class ServiceUserApplication {

    public static void main(String[] args) {

        ApplicationHome home = new ApplicationHome(ServiceUserApplication.class);
        File jarFile = home.getSource();
        String dirPath = jarFile.getParentFile().toString();
        String filePath = dirPath + File.separator + ".dubbo";
        System.out.println(filePath);

        System.setProperty("dubbo.meta.cache.filePath", filePath);
        System.setProperty("dubbo.mapping.cache.filePath",filePath);

        SpringApplication.run(ServiceUserApplication.class, args);

    }

}

Storing Shiro user sessions in Redis with SpringBoot

Step 1: add the Shiro-Redis dependency

Note: the original Shiro dependency can be removed; the following starter already bundles Shiro

        <!-- https://mvnrepository.com/artifact/org.crazycake/shiro-redis-spring-boot-starter -->
        <dependency>
            <groupId>org.crazycake</groupId>
            <artifactId>shiro-redis-spring-boot-starter</artifactId>
            <version>3.3.1</version>
        </dependency>

Step 2: modify the ShiroConfig class

Inject redisSessionDAO and redisCacheManager:

@Autowired
RedisSessionDAO redisSessionDAO;

@Autowired
RedisCacheManager redisCacheManager;

Wire them into your own SessionManager and SessionsSecurityManager:

    // session manager configuration: back the SessionDAO with redis
    @Bean
    public SessionManager sessionManager() {
        DefaultWebSessionManager sessionManager = new DefaultWebSessionManager();
        // inject redisSessionDAO
        sessionManager.setSessionDAO(redisSessionDAO);
        // other stuff...
        return sessionManager;
    }
    @Bean
    public DefaultWebSecurityManager securityManager(List<Realm> realms, SessionManager sessionManager) {
        DefaultWebSecurityManager securityManager = new DefaultWebSecurityManager(realms);
        //inject sessionManager
        securityManager.setSessionManager(sessionManager);
        // inject redisCacheManager
        securityManager.setCacheManager(redisCacheManager);
        // other stuff...
        return securityManager;
    }

Step 3: configuration properties (optional)

Property (default): description

shiro-redis.enabled (true): enables the shiro-redis Spring module.

shiro-redis.redis-manager.deploy-mode (standalone): Redis deployment mode. Options: standalone, sentinel, cluster.

shiro-redis.redis-manager.host (127.0.0.1:6379): Redis host. In sentinel or cluster mode, separate the hosts with commas, e.g. 127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381.

shiro-redis.redis-manager.master-name (mymaster): sentinel mode only; the name of the Redis master node.

shiro-redis.redis-manager.timeout (2000): Redis connection timeout; how long, in milliseconds, jedis waits when connecting to the redis server.

shiro-redis.redis-manager.so-timeout (2000): sentinel or cluster mode only; timeout for jedis reads from the redis server.

shiro-redis.redis-manager.max-attempts (3): cluster mode only; maximum attempts to connect to the server.

shiro-redis.redis-manager.password (none): Redis password.

shiro-redis.redis-manager.database (0): Redis database index. The default is 0.

shiro-redis.redis-manager.count (100): scan count. shiro-redis uses SCAN to fetch keys, so this defines how many elements each iteration returns.

shiro-redis.session-dao.expire (-2): Redis key/value expire time in seconds. Special values: -1 means no expiry; -2 means the same as the session timeout (the default). Note: make sure the expire time is longer than the session timeout.

shiro-redis.session-dao.key-prefix (shiro:session:): custom redis key prefix for session management. Note: remember the trailing colon at the end of the prefix.

shiro-redis.session-dao.session-in-memory-timeout (1000): on login, shiro calls doReadSession(sessionId) roughly 10 times, so shiro-redis caches the session in a ThreadLocal to mitigate this; this value is the expiry of that ThreadLocal copy. In most cases you don't need to change it.

shiro-redis.session-dao.session-in-memory-enabled (true): whether to temporarily cache the session in a ThreadLocal.

shiro-redis.cache-manager.principal-id-field-name (id): the principal id field name, i.e. the field whose value uniquely identifies this principal. For example, with UserInfo as the principal class, the id field might be id, userId, email, etc. Remember to add a getter for it: getId(), getUserId(), getEmail(), and so on. The default id means your principal object must have a getId() method.

shiro-redis.cache-manager.expire (1800): Redis cache key/value expire time, in seconds.

shiro-redis.cache-manager.key-prefix (shiro:cache:): custom redis key prefix for cache management. Note: remember the trailing colon at the end of the prefix.
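Tying a few of these together, a hypothetical application.properties fragment for a standalone Redis might look like this (all values are examples, not requirements):

```properties
shiro-redis.enabled = true
shiro-redis.redis-manager.deploy-mode = standalone
shiro-redis.redis-manager.host = 127.0.0.1:6379
shiro-redis.redis-manager.database = 0
shiro-redis.session-dao.key-prefix = shiro:session:
shiro-redis.cache-manager.expire = 1800
```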

Step 4: start and test

After a successful login, you will find session data like the following in redis:

SpringBoot 2.7.11 with MyBatis + Shiro

Add the Shiro dependency

        <!-- shiro permission framework -->
        <dependency>
            <groupId>org.apache.shiro</groupId>
            <artifactId>shiro-spring</artifactId>
            <version>1.4.0</version>
        </dependency>

Create the pojo entity classes

Account.java (account)

package cn.com.sway.doctor.service.common.model;

import lombok.Data;

@Data
public class Account {

    private String id;
    private String mobile;
    private String password;
    private String name;

}

Role.java (role)

package cn.com.sway.doctor.service.common.model;

import lombok.Data;

@Data
public class Role {

    private String id;
    private String name;

}

Permission.java (permission)

package cn.com.sway.doctor.service.common.model;

import lombok.Data;

@Data
public class Permission {

    private String id;
    private String name;

}

Create the Mapper

AuthMapper.java

package cn.com.sway.doctor.service.user.mapper;

import cn.com.sway.doctor.service.common.model.Account;
import org.apache.ibatis.annotations.Mapper;

import java.util.List;
import java.util.Map;

@Mapper
public interface AuthMapper {

    Account getAccountByMobile(String mobile);

    List<Map<String,Object>> getAccountPower(String mobile);

}

AuthMapper.xml

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper
        PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
        "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="cn.com.sway.doctor.service.user.mapper.AuthMapper">

    <!-- look up the account by mobile number -->
    <select id="getAccountByMobile" resultType="cn.com.sway.doctor.service.common.model.Account" parameterType="String">
        SELECT *
        FROM account
        WHERE mobile=#{mobile}
    </select>

    <!-- look up the account's roles and permissions by mobile number -->
    <select id="getAccountPower" resultType="java.util.HashMap" parameterType="String">
        SELECT account.id, account.mobile, role.name AS role, permission.id AS permission
        FROM account,
             role,
             permission,
             account_role,
             role_permission
        WHERE account.mobile=#{mobile}
          AND account.id=account_role.accountId
          AND account_role.roleId=role.id
          AND role_permission.roleId=role.id
          AND role_permission.permissionId=permission.id
    </select>

</mapper>

Create the Service

AuthService.java

package cn.com.sway.doctor.service.user.service;

import cn.com.sway.doctor.service.common.model.Account;

import java.util.List;
import java.util.Map;

public interface AuthService {

    Account getAccountByMobile(String mobile);

    List<Map<String,Object>> getAccountPower(String mobile);

}

AuthServiceImpl.java

package cn.com.sway.doctor.service.user.service.impl;

import cn.com.sway.doctor.service.common.model.Account;
import cn.com.sway.doctor.service.user.mapper.AuthMapper;
import cn.com.sway.doctor.service.user.service.AuthService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.List;
import java.util.Map;

@Service
public class AuthServiceImpl implements AuthService {

    @Autowired
    AuthMapper authMapper;

    @Override
    public Account getAccountByMobile(String mobile) {
        return authMapper.getAccountByMobile(mobile);
    }

    @Override
    public List<Map<String, Object>> getAccountPower(String mobile) {
        return authMapper.getAccountPower(mobile);
    }
}

Create the authentication realm

ShiroRealm.java

package cn.com.sway.doctor.service.user.shiro;

import cn.com.sway.doctor.service.common.model.Account;
import cn.com.sway.doctor.service.user.service.AuthService;
import org.apache.shiro.authc.AuthenticationException;
import org.apache.shiro.authc.AuthenticationInfo;
import org.apache.shiro.authc.AuthenticationToken;
import org.apache.shiro.authc.SimpleAuthenticationInfo;
import org.apache.shiro.authz.AuthorizationInfo;
import org.apache.shiro.authz.SimpleAuthorizationInfo;
import org.apache.shiro.realm.AuthorizingRealm;
import org.apache.shiro.subject.PrincipalCollection;
import org.springframework.beans.factory.annotation.Autowired;
import java.util.List;
import java.util.Map;

public class ShiroRealm extends AuthorizingRealm {

    @Autowired
    private AuthService loginService;

    /**
     * Load roles and permissions for the principal.
     * @param principalCollection
     * @return
     */
    @Override
    protected AuthorizationInfo doGetAuthorizationInfo(PrincipalCollection principalCollection) {

        // get the logged-in user's mobile number
        String mobile = (String) principalCollection.getPrimaryPrincipal();
        // collect the user's roles and permissions
        SimpleAuthorizationInfo simpleAuthorizationInfo = new SimpleAuthorizationInfo();

        List<Map<String, Object>> powerList = loginService.getAccountPower(mobile);
        System.out.println(powerList.toString());
        for (Map<String, Object> powerMap : powerList) {
            // add role
            simpleAuthorizationInfo.addRole(String.valueOf(powerMap.get("role")));
            // add permission
            simpleAuthorizationInfo.addStringPermission(String.valueOf(powerMap.get("permission")));
        }
        return simpleAuthorizationInfo;
    }

    /**
     * Authenticate the login.
     * @param authenticationToken
     * @return
     * @throws AuthenticationException
     */
    @Override
    protected AuthenticationInfo doGetAuthenticationInfo(AuthenticationToken authenticationToken) throws AuthenticationException {
        // on a POST request authentication runs before the request is handled, so guard against a null principal
        if (authenticationToken.getPrincipal() == null) {
            return null;
        }
        // get the user's unique identifier
        String mobile = authenticationToken.getPrincipal().toString();
        // look the account up in the database by mobile number
        Account account = loginService.getAccountByMobile(mobile);
        if (account == null) {
            // returning null here makes shiro throw the corresponding exception
            return null;
        } else {
            // shiro compares the authenticationToken against this SimpleAuthenticationInfo
            SimpleAuthenticationInfo simpleAuthenticationInfo = new SimpleAuthenticationInfo(account, account.getPassword(), getName());
            return simpleAuthenticationInfo;
        }
    }

}

Create the configuration class

ShiroConfig.java

package cn.com.sway.doctor.service.user.config;

import cn.com.sway.doctor.service.user.shiro.ShiroRealm;
import org.apache.shiro.spring.security.interceptor.AuthorizationAttributeSourceAdvisor;
import org.apache.shiro.spring.web.ShiroFilterFactoryBean;
import org.apache.shiro.web.mgt.DefaultWebSecurityManager;
import org.springframework.aop.framework.autoproxy.DefaultAdvisorAutoProxyCreator;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ShiroConfig {
    @Bean
    @ConditionalOnMissingBean
    public DefaultAdvisorAutoProxyCreator defaultAdvisorAutoProxyCreator() {
        DefaultAdvisorAutoProxyCreator defaultAAP = new DefaultAdvisorAutoProxyCreator();
        defaultAAP.setProxyTargetClass(true);
        return defaultAAP;
    }

    // register our own realm with the container
    @Bean
    public ShiroRealm shiroRealm() {
        ShiroRealm shiroRealm = new ShiroRealm();
        return shiroRealm;
    }

    // security manager: mainly configures realm authentication
    @Bean
    public DefaultWebSecurityManager securityManager() {
        DefaultWebSecurityManager securityManager = new DefaultWebSecurityManager();
        securityManager.setRealm(shiroRealm());
        return securityManager;
    }

    // shiro's core filter
    @Bean("shiroFilter")
    public ShiroFilterFactoryBean shiroFilter(DefaultWebSecurityManager securityManager) {
        ShiroFilterFactoryBean shiroFilter = new ShiroFilterFactoryBean();
        shiroFilter.setSecurityManager(securityManager);
        return shiroFilter;
    }

    @Bean
    public AuthorizationAttributeSourceAdvisor authorizationAttributeSourceAdvisor(DefaultWebSecurityManager securityManager) {
        AuthorizationAttributeSourceAdvisor authorizationAttributeSourceAdvisor = new AuthorizationAttributeSourceAdvisor();
        authorizationAttributeSourceAdvisor.setSecurityManager(securityManager);
        return authorizationAttributeSourceAdvisor;
    }
}

Create the data

The account table:

/*
 Navicat Premium Data Transfer

 Source Server         : localhost
 Source Server Type    : MySQL
 Source Server Version : 50623
 Source Host           : localhost:3306
 Source Schema         : sway-doctor-service-user

 Target Server Type    : MySQL
 Target Server Version : 50623
 File Encoding         : 65001

 Date: 24/04/2023 22:12:38
*/

SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;

-- ----------------------------
-- Table structure for account
-- ----------------------------
DROP TABLE IF EXISTS `account`;
CREATE TABLE `account`  (
  `id` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
  `mobile` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `password` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Compact;

SET FOREIGN_KEY_CHECKS = 1;

The role table:

/*
 Navicat Premium Data Transfer

 Source Server         : localhost
 Source Server Type    : MySQL
 Source Server Version : 50623
 Source Host           : localhost:3306
 Source Schema         : sway-doctor-service-user

 Target Server Type    : MySQL
 Target Server Version : 50623
 File Encoding         : 65001

 Date: 24/04/2023 22:14:30
*/

SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;

-- ----------------------------
-- Table structure for role
-- ----------------------------
DROP TABLE IF EXISTS `role`;
CREATE TABLE `role`  (
  `id` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
  `name` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Compact;

SET FOREIGN_KEY_CHECKS = 1;

The permission table:

/*
 Navicat Premium Data Transfer

 Source Server         : localhost
 Source Server Type    : MySQL
 Source Server Version : 50623
 Source Host           : localhost:3306
 Source Schema         : sway-doctor-service-user

 Target Server Type    : MySQL
 Target Server Version : 50623
 File Encoding         : 65001

 Date: 24/04/2023 22:14:19
*/

SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;

-- ----------------------------
-- Table structure for permission
-- ----------------------------
DROP TABLE IF EXISTS `permission`;
CREATE TABLE `permission`  (
  `id` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
  `name` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Compact;

SET FOREIGN_KEY_CHECKS = 1;

The account_role table:

/*
 Navicat Premium Data Transfer

 Source Server         : localhost
 Source Server Type    : MySQL
 Source Server Version : 50623
 Source Host           : localhost:3306
 Source Schema         : sway-doctor-service-user

 Target Server Type    : MySQL
 Target Server Version : 50623
 File Encoding         : 65001

 Date: 24/04/2023 22:14:08
*/

SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;

-- ----------------------------
-- Table structure for account_role
-- ----------------------------
DROP TABLE IF EXISTS `account_role`;
CREATE TABLE `account_role`  (
  `id` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
  `accountId` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `roleId` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Compact;

SET FOREIGN_KEY_CHECKS = 1;

The role_permission table:

/*
 Navicat Premium Data Transfer

 Source Server         : localhost
 Source Server Type    : MySQL
 Source Server Version : 50623
 Source Host           : localhost:3306
 Source Schema         : sway-doctor-service-user

 Target Server Type    : MySQL
 Target Server Version : 50623
 File Encoding         : 65001

 Date: 24/04/2023 22:14:36
*/

SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;

-- ----------------------------
-- Table structure for role_permission
-- ----------------------------
DROP TABLE IF EXISTS `role_permission`;
CREATE TABLE `role_permission`  (
  `id` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
  `roleId` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `permissionId` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Compact;

SET FOREIGN_KEY_CHECKS = 1;

Usage

package cn.com.sway.doctor.service.user.controller.api.v1;

import cn.com.sway.doctor.service.common.model.Account;
import cn.com.sway.doctor.service.common.query.AccountQuery;
import cn.com.sway.doctor.service.user.dao.AccountDao;
import org.apache.shiro.authz.annotation.*;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

import java.util.List;

@RestController
@RequestMapping("/api/v1/accounts")
public class AccountsApiControllerV1 {

    @Autowired
    private AccountDao accountDao;

    //@RequiresAuthentication
    //@RequiresGuest
    //@RequiresUser
    @RequiresRoles("admin")
    //@RequiresPermissions("account-manage")
    @ResponseBody
    @GetMapping("")
    public String query() {

        AccountQuery accountQuery = new AccountQuery();
        List<Account> list = accountDao.query(accountQuery);

        System.out.println(accountDao.query(accountQuery).size());

        return "account total: "+list.size();
    }

}

SpringBoot 2.7.11 with MyBatis 2.2.2

Add the following dependencies to pom.xml:

        <!-- MyBatis integration -->
        <!-- https://mvnrepository.com/artifact/org.mybatis.spring.boot/mybatis-spring-boot-starter -->
        <dependency>
            <groupId>org.mybatis.spring.boot</groupId>
            <artifactId>mybatis-spring-boot-starter</artifactId>
            <version>2.2.2</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/mysql/mysql-connector-java -->
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>8.0.33</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/junit/junit -->
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.13</version>
            <scope>test</scope>
        </dependency>
        <!-- https://mvnrepository.com/artifact/com.alibaba/druid-spring-boot-starter -->
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>druid-spring-boot-starter</artifactId>
            <version>1.2.17</version>
        </dependency>

Add the following configuration to application.properties:

spring.datasource.url=jdbc:mysql://localhost:3306/sway-doctor-service-user
spring.datasource.username=root
spring.datasource.password=
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver

# Add and configure the third-party Druid datasource
# datasource type
spring.datasource.type=com.alibaba.druid.pool.DruidDataSource
# initial connection count
spring.datasource.initialSize=20
# minimum idle connections
spring.datasource.minIdle=10
# maximum active connections
spring.datasource.maxActive=100

# location of the MyBatis mapper XML files
mybatis.mapper-locations = classpath:mapper/*.xml
# package of the entity classes aliased in the XML mappers
#mybatis.type-aliases-package = cn.com.sway.doctor.service.common
# enable underscore-to-camelCase column mapping
mybatis.configuration.map-underscore-to-camel-case=true
#mybatis.configuration.log-impl = org.apache.ibatis.logging.stdout.StdOutImpl

Create the DataSourceConfig.java datasource configuration class; the code is as follows:

package cn.com.sway.doctor.service.user.config;

import com.alibaba.druid.pool.DruidDataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import javax.sql.DataSource;

@Configuration
public class DataSourceConfig {
    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource getDruid() {
        return new DruidDataSource();
    }
}

Create the Account.java entity; the code is as follows:

package cn.com.sway.doctor.service.common.model;

import lombok.Data;

@Data
public class Account {

    private String id;
    private String mobile;
    private String password;
    private String name;

}

Create the AccountQuery.java query object; the code is as follows:

package cn.com.sway.doctor.service.common.query;

import lombok.Data;

import javax.validation.constraints.Min;
import javax.validation.constraints.Pattern;

@Data
public class AccountQuery {

    private String id;

    @Pattern(regexp = "1\\d{10}", message = "please enter a valid mobile number")
    private String mobile;

    private String password;

    @Min(value = 0, message = "offset must not be negative")
    private Integer offset;

    @Min(value = 0, message = "limit must not be negative")
    private Integer  limit;

}

Create AccountDao.java; the code is as follows:

package cn.com.sway.doctor.service.user.dao;

import cn.com.sway.doctor.service.common.model.Account;
import cn.com.sway.doctor.service.common.query.AccountQuery;
import org.apache.ibatis.annotations.Mapper;
import org.springframework.validation.annotation.Validated;

import java.util.List;

@Mapper
public interface AccountDao {

    List<Account> query(@Validated AccountQuery accountQuery);

}

Create AccountMapper.xml; the code is as follows:

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper
        PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
        "http://mybatis.org/dtd/mybatis-3-mapper.dtd">

<mapper namespace="cn.com.sway.doctor.service.user.dao.AccountDao">

    <resultMap id="account" type="cn.com.sway.doctor.service.common.model.Account">
        <id property="id" column="id"/>
        <result property="mobile" column="mobile"/>
        <result property="name" column="name"/>
    </resultMap>

    <!-- query the account list -->
    <select id="query" resultMap="account">
        SELECT * FROM account
            <where>
                <if test="mobile!=null and mobile!=''">
                    and mobile = #{mobile}
                </if>
            </where>
    </select>

</mapper>

Write a unit test; the code is as follows:

package cn.com.sway.doctor.service.user;

import cn.com.sway.doctor.service.common.query.AccountQuery;
import cn.com.sway.doctor.service.user.dao.AccountDao;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
class ServiceUserApplicationTests {

    @Autowired
    private AccountDao accountDao;

    @Test
    public void testAccountMapper(){
        AccountQuery accountQuery = new AccountQuery();
        accountQuery.setMobile("18688814832");
        System.out.println(accountDao.query(accountQuery).size());
    }

}