AntDB Rebalance Performance Comparison, Part 1

Tags: AntDB, Rebalance, PostgreSQL, PGXC


AntDB Cluster Environment

postgres=# table pgxc_node;
 node_name | node_type | node_port |  node_host   | nodeis_primary | nodeis_preferred |   node_id   
-----------+-----------+-----------+--------------+----------------+------------------+-------------
 cd1       | C         |     39000 | 10.21.20.176 | f              | f                | -1265012607
 cd2       | C         |     39000 | 10.21.20.175 | f              | f                |   455674804
 dn4       | D         |     39002 | 10.21.20.175 | f              | f                |   823103418
 dn2       | D         |     39002 | 10.21.20.176 | f              | f                |   352366662
 dn3       | D         |     39001 | 10.21.20.175 | f              | f                |  -700122826
 dn1       | D         |     39001 | 10.21.20.176 | f              | f                |  -560021589
(6 rows)

Test Case Description

  • Hash distribution -> hash distribution
  • Distribution key unchanged, node set changes
  • No indexes on the table
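
Concretely, each test compares AntDB's built-in redistribution against simulating it by inserting into a pre-created table on the target node set. Using the tables defined below, the two forms look like this:

--built-in rebalance: move the table onto a new node set in place
alter table t_one_million to node(dn1,dn2,dn3);
--simulated rebalance: copy into a table pre-created on the target nodes
insert into t_one_million_new select * from t_one_million;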

Test Data Preparation

Note that each table name overstates its row count by a factor of ten: t_one_million holds 100,000 rows, t_ten_million 1,000,000, t_hundred_million 10,000,000, and t_thousand_million 100,000,000.

-- 100,000 rows
create table t_one_million(id int8, value int8) distribute by hash(id);
create table t_one_million_new(id int8, value int8) distribute by hash(id) to node(dn1, dn2, dn3);
insert into t_one_million select r, r*random()*10::int from generate_series(1,100000)r;
-- 1,000,000 rows
create table t_ten_million(id int8, value int8) distribute by hash(id);
create table t_ten_million_new(id int8, value int8) distribute by hash(id) to node(dn1, dn2, dn3);
insert into t_ten_million select r, r*random()*10::int from generate_series(1,1000000)r;
-- 10,000,000 rows
create table t_hundred_million(id int8, value int8) distribute by hash(id);
create table t_hundred_million_new(id int8, value int8) distribute by hash(id) to node(dn1, dn2, dn3);
insert into t_hundred_million select r, r*random()*10::int from generate_series(1,10000000)r;
-- 100,000,000 rows
create table t_thousand_million(id int8, value int8) distribute by hash(id);
create table t_thousand_million_new(id int8, value int8) distribute by hash(id) to node(dn1, dn2, dn3);
insert into t_thousand_million select r, r*random()*10::int from generate_series(1,100000000)r;
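
All of the Time: ... lines in the transcripts below are client-side timings; presumably the sessions were run with psql's timing switch enabled:

--enable client-side timing so psql prints "Time: ... ms" after each command
\timing on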

Rebalancing 100,000 Rows

postgres=# -- 100,000 rows, rebalance, node removed: (dn1,dn2,dn3,dn4)->(dn1,dn2,dn3)
postgres=# select node_name, count(1) from t_one_million a, pgxc_node b where a.xc_node_id = b.node_id group by 1 order by 1;
 node_name | count 
-----------+-------
 dn1       | 25006
 dn2       | 24739
 dn3       | 24973
 dn4       | 25282
(4 rows)

Time: 290.777 ms
postgres=# alter table t_one_million to node(dn1,dn2,dn3);
ALTER TABLE
Time: 1241.957 ms
postgres=# select node_name, count(1) from t_one_million a, pgxc_node b where a.xc_node_id = b.node_id group by 1 order by 1;
 node_name | count 
-----------+-------
 dn1       | 33343
 dn2       | 33209
 dn3       | 33448
(3 rows)

Time: 264.952 ms
postgres=# -- 100,000 rows, rebalance, node set changed: (dn1,dn2,dn3) -> (dn1,dn3,dn4)
postgres=# alter table t_one_million to node(dn1,dn3,dn4);
ALTER TABLE
Time: 1132.920 ms
postgres=# select node_name, count(1) from t_one_million a, pgxc_node b where a.xc_node_id = b.node_id group by 1 order by 1;
 node_name | count 
-----------+-------
 dn1       | 33343
 dn3       | 33209
 dn4       | 33448
(3 rows)

Time: 302.298 ms
postgres=# -- 100,000 rows, rebalance, node added: (dn1,dn3,dn4) -> (dn1,dn2,dn3,dn4)
postgres=# alter table t_one_million to node(dn1,dn2,dn3,dn4);
ALTER TABLE
Time: 1275.130 ms
postgres=# select node_name, count(1) from t_one_million a, pgxc_node b where a.xc_node_id = b.node_id group by 1 order by 1;
 node_name | count 
-----------+-------
 dn1       | 25006
 dn2       | 24739
 dn3       | 24973
 dn4       | 25282
(4 rows)

Time: 299.630 ms

Inserting 100,000 Rows into a New Table (4 Nodes -> 3 Nodes)

postgres=# explain (verbose, analyze, plan_id) insert into t_one_million_new select * from t_one_million;
                                                            QUERY PLAN                                                             
-----------------------------------------------------------------------------------------------------------------------------------
 Cluster Gather  (cost=1001.00..1568.75 rows=1850 width=16) (actual time=150.624..150.624 rows=0 loops=1)
   Plan id: 0
   Remote node: 16385,16386,16387,16388
   ->  Insert on public.t_one_million_new  (cost=1.00..13.75 rows=617 width=16) (actual time=0.087..0.087 rows=0 loops=1)
         Plan id: 1
         Node 16385: (actual time=63.338..63.338 rows=0 loops=1)
         Node 16386: (actual time=143.233..143.233 rows=0 loops=1)
         Node 16387: (actual time=144.489..144.489 rows=0 loops=1)
         Node 16388: (actual time=150.246..150.246 rows=0 loops=1)
         ->  Cluster Reduce  (cost=1.00..13.75 rows=617 width=16) (actual time=0.084..0.084 rows=0 loops=1)
               Plan id: 2
               Reduce: ('[0:2]={16388,16386,16387}'::oid[])[COALESCE(int4abs((hashint8(t_one_million.id) % 3)), 0)]
               Node 16385: (actual time=63.336..63.336 rows=0 loops=1)
               Node 16386: (actual time=1.727..67.580 rows=33209 loops=1)
               Node 16387: (actual time=1.537..68.800 rows=33448 loops=1)
               Node 16388: (actual time=0.050..72.396 rows=33343 loops=1)
               ->  Seq Scan on public.t_one_million  (cost=0.00..7.12 rows=462 width=16) (actual time=0.008..0.008 rows=0 loops=1)
                     Plan id: 3
                     Output: t_one_million.id, t_one_million.value
                     Remote node: 16388,16386,16387,16385
                     Node 16385: (actual time=0.033..8.568 rows=25282 loops=1)
                     Node 16386: (actual time=0.045..10.478 rows=24739 loops=1)
                     Node 16387: (actual time=0.057..9.939 rows=24973 loops=1)
                     Node 16388: (actual time=0.038..11.604 rows=25006 loops=1)
 Planning time: 0.651 ms
 Execution time: 266.723 ms
(26 rows)

Time: 457.419 ms
postgres=# select node_name, count(1) from t_one_million_new a, pgxc_node b where a.xc_node_id = b.node_id group by 1 order by 1;
 node_name | count 
-----------+-------
 dn1       | 33343
 dn2       | 33209
 dn3       | 33448
(3 rows)

Time: 266.118 ms
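
In the plan above, the Cluster Reduce node computes each row's target node as hashint8(id) % 3, indexing into the target oid array {16388,16386,16387}. As a hedged sketch (assuming the hashint8() function exposed at the SQL level matches the hash the planner uses here), the resulting spread can be previewed with plain SQL:

--emulate the Reduce expression: buckets 0..2 select one of the three target nodes
select coalesce(abs(hashint8(id) % 3), 0) as target_bucket, count(*)
from t_one_million
group by 1
order by 1;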

Rebalancing 1,000,000 Rows

postgres=# -- 1,000,000 rows, rebalance, node removed: (dn1,dn2,dn3,dn4)->(dn1,dn2,dn3)
postgres=# select node_name, count(1) from t_ten_million a, pgxc_node b where a.xc_node_id = b.node_id group by 1 order by 1;
 node_name | count  
-----------+--------
 dn1       | 250043
 dn2       | 249051
 dn3       | 250521
 dn4       | 250385
(4 rows)

Time: 991.656 ms
postgres=# alter table t_ten_million to node(dn1,dn2,dn3);
ALTER TABLE
Time: 7308.235 ms
postgres=# select node_name, count(1) from t_ten_million a, pgxc_node b where a.xc_node_id = b.node_id group by 1 order by 1;
 node_name | count  
-----------+--------
 dn1       | 333622
 dn2       | 332909
 dn3       | 333469
(3 rows)

Time: 1071.059 ms
postgres=# -- 1,000,000 rows, rebalance, node set changed: (dn1,dn2,dn3) -> (dn1,dn3,dn4)
postgres=# alter table t_ten_million to node(dn1,dn3,dn4);
ALTER TABLE
Time: 7804.080 ms
postgres=# select node_name, count(1) from t_ten_million a, pgxc_node b where a.xc_node_id = b.node_id group by 1 order by 1;
 node_name | count  
-----------+--------
 dn1       | 333622
 dn3       | 332909
 dn4       | 333469
(3 rows)

Time: 2010.396 ms
postgres=# -- 1,000,000 rows, rebalance, node added: (dn1,dn3,dn4) -> (dn1,dn2,dn3,dn4)
postgres=# alter table t_ten_million to node(dn1,dn2,dn3,dn4);
ALTER TABLE
Time: 7315.380 ms
postgres=# select node_name, count(1) from t_ten_million a, pgxc_node b where a.xc_node_id = b.node_id group by 1 order by 1;
 node_name | count  
-----------+--------
 dn1       | 250043
 dn2       | 249051
 dn3       | 250521
 dn4       | 250385
(4 rows)

Time: 1071.580 ms

Inserting 1,000,000 Rows into a New Table (4 Nodes -> 3 Nodes)

postgres=# -- 1,000,000 rows, insert into new table: (dn1,dn2,dn3,dn4)->(dn1,dn2,dn3)
postgres=# explain (verbose, analyze, plan_id) insert into t_ten_million_new select * from t_ten_million;
                                                               QUERY PLAN                                                                
-----------------------------------------------------------------------------------------------------------------------------------------
 Cluster Gather  (cost=1001.00..307599.69 rows=1000000 width=16) (actual time=1324.788..1324.788 rows=0 loops=1)
   Plan id: 0
   Remote node: 16385,16386,16387,16388
   ->  Insert on public.t_ten_million_new  (cost=1.00..6599.69 rows=333333 width=16) (actual time=0.117..0.117 rows=0 loops=1)
         Plan id: 1
         Node 16385: (actual time=433.217..433.217 rows=0 loops=1)
         Node 16388: (actual time=1183.333..1183.333 rows=0 loops=1)
         Node 16387: (actual time=1243.368..1243.368 rows=0 loops=1)
         Node 16386: (actual time=1324.372..1324.372 rows=0 loops=1)
         ->  Cluster Reduce  (cost=1.00..6599.69 rows=333333 width=16) (actual time=0.113..0.113 rows=0 loops=1)
               Plan id: 2
               Reduce: ('[0:2]={16388,16386,16387}'::oid[])[COALESCE(int4abs((hashint8(t_ten_million.id) % 3)), 0)]
               Node 16385: (actual time=433.216..433.216 rows=0 loops=1)
               Node 16388: (actual time=0.071..453.257 rows=333622 loops=1)
               Node 16387: (actual time=1.504..479.787 rows=333469 loops=1)
               Node 16386: (actual time=1.717..475.200 rows=332909 loops=1)
               ->  Seq Scan on public.t_ten_million  (cost=0.00..3852.00 rows=250000 width=16) (actual time=0.009..0.009 rows=0 loops=1)
                     Plan id: 3
                     Output: t_ten_million.id, t_ten_million.value
                     Remote node: 16388,16386,16387,16385
                     Node 16385: (actual time=0.033..59.441 rows=250385 loops=1)
                     Node 16388: (actual time=0.054..67.587 rows=250043 loops=1)
                     Node 16387: (actual time=0.041..69.449 rows=250521 loops=1)
                     Node 16386: (actual time=0.052..70.767 rows=249051 loops=1)
 Planning time: 0.510 ms
 Execution time: 1355.531 ms
(26 rows)

Time: 1791.471 ms
postgres=# select node_name, count(1) from t_ten_million_new a, pgxc_node b where a.xc_node_id = b.node_id group by 1 order by 1;
 node_name | count  
-----------+--------
 dn1       | 333622
 dn2       | 332909
 dn3       | 333469
(3 rows)

Time: 990.051 ms

Rebalancing 10,000,000 Rows

postgres=# -- 10,000,000 rows, rebalance, node removed: (dn1,dn2,dn3,dn4)->(dn1,dn2,dn3)
postgres=# select node_name, count(1) from t_hundred_million a, pgxc_node b where a.xc_node_id = b.node_id group by 1 order by 1;
 node_name |  count  
-----------+---------
 dn1       | 2501529
 dn2       | 2500572
 dn3       | 2500608
 dn4       | 2497291
(4 rows)

Time: 7489.552 ms
postgres=# alter table t_hundred_million to node(dn1,dn2,dn3);
ALTER TABLE
Time: 67223.163 ms
postgres=# select node_name, count(1) from t_hundred_million a, pgxc_node b where a.xc_node_id = b.node_id group by 1 order by 1;
 node_name |  count  
-----------+---------
 dn1       | 3337145
 dn2       | 3330709
 dn3       | 3332146
(3 rows)

Time: 9597.601 ms
postgres=# -- 10,000,000 rows, rebalance, node set changed: (dn1,dn2,dn3) -> (dn1,dn3,dn4)
postgres=# alter table t_hundred_million to node(dn1,dn3,dn4);
ALTER TABLE
Time: 65410.713 ms
postgres=# select node_name, count(1) from t_hundred_million a, pgxc_node b where a.xc_node_id = b.node_id group by 1 order by 1;
 node_name |  count  
-----------+---------
 dn1       | 3337145
 dn3       | 3330709
 dn4       | 3332146
(3 rows)

Time: 18137.894 ms
postgres=# -- 10,000,000 rows, rebalance, node added: (dn1,dn3,dn4) -> (dn1,dn2,dn3,dn4)
postgres=# alter table t_hundred_million to node(dn1,dn2,dn3,dn4);
ALTER TABLE
Time: 69442.493 ms
postgres=# select node_name, count(1) from t_hundred_million a, pgxc_node b where a.xc_node_id = b.node_id group by 1 order by 1;
 node_name |  count  
-----------+---------
 dn1       | 2501529
 dn2       | 2500572
 dn3       | 2500608
 dn4       | 2497291
(4 rows)

Time: 7256.681 ms

Inserting 10,000,000 Rows into a New Table (4 Nodes -> 3 Nodes)

postgres=# -- 10,000,000 rows, insert into new table: (dn1,dn2,dn3,dn4)->(dn1,dn2,dn3)
postgres=# explain (verbose, analyze, plan_id) insert into t_hundred_million_new select * from t_hundred_million;
                                                                  QUERY PLAN                                                                   
-----------------------------------------------------------------------------------------------------------------------------------------------
 Cluster Gather  (cost=1001.00..3066981.06 rows=10000000 width=16) (actual time=17953.717..17953.717 rows=0 loops=1)
   Plan id: 0
   Remote node: 16385,16386,16387,16388
   ->  Insert on public.t_hundred_million_new  (cost=1.00..65981.06 rows=3333333 width=16) (actual time=0.090..0.090 rows=0 loops=1)
         Plan id: 1
         Node 16385: (actual time=8843.154..8843.154 rows=0 loops=1)
         Node 16387: (actual time=17729.698..17729.698 rows=0 loops=1)
         Node 16386: (actual time=17796.154..17796.154 rows=0 loops=1)
         Node 16388: (actual time=17953.132..17953.132 rows=0 loops=1)
         ->  Cluster Reduce  (cost=1.00..65981.06 rows=3333333 width=16) (actual time=0.087..0.087 rows=0 loops=1)
               Plan id: 2
               Reduce: ('[0:2]={16388,16386,16387}'::oid[])[COALESCE(int4abs((hashint8(t_hundred_million.id) % 3)), 0)]
               Node 16385: (actual time=8843.152..8843.152 rows=0 loops=1)
               Node 16387: (actual time=1.639..8005.488 rows=3332146 loops=1)
               Node 16386: (actual time=1.786..4721.854 rows=3330709 loops=1)
               Node 16388: (actual time=0.085..4782.529 rows=3337145 loops=1)
               ->  Seq Scan on public.t_hundred_million  (cost=0.00..38513.75 rows=2500000 width=16) (actual time=0.009..0.009 rows=0 loops=1)
                     Plan id: 3
                     Output: t_hundred_million.id, t_hundred_million.value
                     Remote node: 16388,16386,16387,16385
                     Node 16385: (actual time=0.069..554.526 rows=2497291 loops=1)
                     Node 16387: (actual time=0.076..624.383 rows=2500608 loops=1)
                     Node 16386: (actual time=0.077..663.953 rows=2500572 loops=1)
                     Node 16388: (actual time=0.071..654.229 rows=2501529 loops=1)
 Planning time: 0.504 ms
 Execution time: 17984.772 ms
(26 rows)

Time: 18246.613 ms
postgres=# select node_name, count(1) from t_hundred_million_new a, pgxc_node b where a.xc_node_id = b.node_id group by 1 order by 1;
 node_name |  count  
-----------+---------
 dn1       | 3337145
 dn2       | 3330709
 dn3       | 3332146
(3 rows)

Time: 9113.752 ms

Rebalancing 100,000,000 Rows

postgres=# -- 100,000,000 rows, rebalance, node removed: (dn1,dn2,dn3,dn4)->(dn1,dn2,dn3)
postgres=# select node_name, count(1) from t_thousand_million a, pgxc_node b where a.xc_node_id = b.node_id group by 1 order by 1;
 node_name |  count   
-----------+----------
 dn1       | 25003114
 dn2       | 24994822
 dn3       | 25004264
 dn4       | 24997800
(4 rows)

Time: 85864.177 ms
postgres=# alter table t_thousand_million to node(dn1,dn2,dn3);
ALTER TABLE
Time: 682586.634 ms
postgres=# select node_name, count(1) from t_thousand_million a, pgxc_node b where a.xc_node_id = b.node_id group by 1 order by 1;
 node_name |  count   
-----------+----------
 dn1       | 33340379
 dn2       | 33327029
 dn3       | 33332592
(3 rows)

Time: 109535.877 ms
postgres=# -- 100,000,000 rows, rebalance, node added: (dn1,dn2,dn3) -> (dn1,dn2,dn3,dn4); the node-set-change step was skipped for this table
postgres=# alter table t_thousand_million to node(dn1,dn2,dn3,dn4);
ALTER TABLE
Time: 643187.928 ms
postgres=# select node_name, count(1) from t_thousand_million a, pgxc_node b where a.xc_node_id = b.node_id group by 1 order by 1;
 node_name |  count   
-----------+----------
 dn1       | 25003114
 dn2       | 24994822
 dn3       | 25004264
 dn4       | 24997800
(4 rows)

Time: 80667.392 ms

Inserting 100,000,000 Rows into a New Table (4 Nodes -> 3 Nodes)

postgres=# explain (verbose, analyze, plan_id) insert into t_thousand_million_new select * from t_thousand_million;
                                                                    QUERY PLAN                                                                    
--------------------------------------------------------------------------------------------------------------------------------------------------
 Cluster Gather  (cost=1001.00..30660934.44 rows=100000456 width=16) (actual time=197525.915..197525.915 rows=0 loops=1)
   Plan id: 0
   Remote node: 16385,16386,16387,16388
   ->  Insert on public.t_thousand_million_new  (cost=1.00..659797.64 rows=33333485 width=16) (actual time=0.108..0.108 rows=0 loops=1)
         Plan id: 1
         Node 16385: (actual time=82753.501..82753.501 rows=0 loops=1)
         Node 16386: (actual time=188120.349..188120.349 rows=0 loops=1)
         Node 16388: (actual time=193391.484..193391.484 rows=0 loops=1)
         Node 16387: (actual time=197524.191..197524.191 rows=0 loops=1)
         ->  Cluster Reduce  (cost=1.00..659797.64 rows=33333485 width=16) (actual time=0.106..0.106 rows=0 loops=1)
               Plan id: 2
               Reduce: ('[0:2]={16388,16386,16387}'::oid[])[COALESCE(int4abs((hashint8(t_thousand_million.id) % 3)), 0)]
               Node 16385: (actual time=82753.499..82753.499 rows=0 loops=1)
               Node 16386: (actual time=0.758..43715.150 rows=33327029 loops=1)
               Node 16388: (actual time=0.123..46315.190 rows=33340379 loops=1)
               Node 16387: (actual time=2.445..92578.101 rows=33332592 loops=1)
               ->  Seq Scan on public.t_thousand_million  (cost=0.00..385136.89 rows=25000114 width=16) (actual time=0.010..0.010 rows=0 loops=1)
                     Plan id: 3
                     Output: t_thousand_million.id, t_thousand_million.value
                     Remote node: 16388,16386,16387,16385
                     Node 16385: (actual time=0.093..5551.210 rows=24997800 loops=1)
                     Node 16386: (actual time=0.107..5895.898 rows=24994822 loops=1)
                     Node 16388: (actual time=0.103..6370.683 rows=25003114 loops=1)
                     Node 16387: (actual time=0.090..6057.547 rows=25004264 loops=1)
 Planning time: 0.524 ms
 Execution time: 197551.749 ms
(26 rows)

Time: 197733.173 ms
postgres=# select node_name, count(1) from t_thousand_million_new a, pgxc_node b where a.xc_node_id = b.node_id group by 1 order by 1;
 node_name |  count   
-----------+----------
 dn1       | 33340379
 dn2       | 33327029
 dn3       | 33332592
(3 rows)

Time: 98975.694 ms

AntDB Rebalance vs. Insert-into-New-Table Performance Comparison

[Figure: timing comparison of rebalance vs. insert into a new table; the measurements from the transcripts above are:]

 Rows        | alter table ... to node (4->3) | insert into new table (4->3)
-------------+--------------------------------+------------------------------
 100,000     |                   1,241.957 ms |                   457.419 ms
 1,000,000   |                   7,308.235 ms |                 1,791.471 ms
 10,000,000  |                  67,223.163 ms |                18,246.613 ms
 100,000,000 |                 682,586.634 ms |               197,733.173 ms

Summary:

As the comparison above shows, with no index on the table, simulating redistribution by inserting into a new table takes only roughly a quarter to a third of the time of the built-in rebalance. This suggests that the redistribution logic in AntDB (PGXC) still has room for optimization.
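
For reference, here is a hedged sketch of the insert-based workaround as a complete swap; it assumes the table can be locked for the duration, that ALTER TABLE ... RENAME behaves as in stock PostgreSQL, and that indexes, constraints, and privileges (none in these tests) would be recreated separately:

begin;
--new table pre-created on the target node set
create table t_one_million_tmp(id int8, value int8) distribute by hash(id) to node(dn1,dn2,dn3);
--copy all rows; the Cluster Reduce plan shown earlier does the routing
insert into t_one_million_tmp select * from t_one_million;
--swap the new table in under the old name
drop table t_one_million;
alter table t_one_million_tmp rename to t_one_million;
commit;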

Copyright notice: this is an original article by the blogger, released under the CC 4.0 BY-SA license. Please include the original source link and this notice when reposting.
Original article: https://blog.csdn.net/constzl/article/details/79221789
