| Message ID | 20210714022238.479812-1-kumar.amber@intel.com |
|---|---|
| State | Deferred |
| Series | [ovs-dev,v1] dpif-netdev: add mfex options to scalar dpif |
| Context | Check | Description |
|---|---|---|
| ovsrobot/apply-robot | fail | apply and check: fail |
| ovsrobot/github-robot | fail | github build: failed |
Bleep bloop. Greetings kumar Amber, I am a robot and I have tried out your patch. Thanks for your contribution.

I encountered some error that I wasn't expecting. See the details below.

build:

```
lib/ovs-atomic-gcc4.7+.h:47:42: error: left-hand operand of comma expression has no effect [-Werror=unused-value]
     (*(DST) = __atomic_load_n(SRC, ORDER), \
                                          ^
lib/ovs-atomic.h:401:5: note: in expansion of macro 'atomic_read_explicit'
     atomic_read_explicit(VAR, DST, memory_order_relaxed)
     ^
lib/dpif-netdev.c:6828:5: note: in expansion of macro 'atomic_read_relaxed'
     atomic_read_relaxed(&pmd->miniflow_extract_opt, &mfex_func);
     ^
lib/dpif-netdev.c:6885:31: error: called object 'mfex_func' is not a function or function pointer
     mf_ret = mfex_func(&single_packet, key, 1, port_no, pmd);
              ^
lib/dpif-netdev.c:6827:27: note: declared here
     miniflow_extract_func mfex_func;
                           ^
lib/dpif-netdev.c:6888:17: error: 'n_mfex_opt_hit' undeclared (first use in this function)
     n_mfex_opt_hit++;
     ^
lib/dpif-netdev.c:6888:17: note: each undeclared identifier is reported only once for each function it appears in
cc1: all warnings being treated as errors
make[2]: *** [lib/dpif-netdev.lo] Error 1
make[2]: Leaving directory `/var/lib/jenkins/jobs/0day_robot_upstream_build_from_pw/workspace'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/var/lib/jenkins/jobs/0day_robot_upstream_build_from_pw/workspace'
make: *** [all] Error 2
```

Please check this out. If you feel there has been an error, please email aconole@redhat.com

Thanks,
0-day Robot
Hi Kumar,

First, can you please use the correct subject line for your patches, i.e. include the word PATCH? See https://github.com/openvswitch/ovs/blob/master/Documentation/internals/contributing/submitting-patches.rst#email-subject.

More specific to this patch, can you add the reason why this patch was sent separately, i.e. the potential performance penalty?

Also, you should run some Port-to-Port and Port-VirtualPort-Port (PVP) tests and report the results with and without this patch for the scalar datapath (so no AVX enabled anywhere). This way people can give feedback on this performance regression and make a proper decision.

Cheers,

Eelco

On 14 Jul 2021, at 4:22, kumar Amber wrote:

> This commits add the mfex optimized options to be executed as part of
> scalar DPIF.
>
> Signed-off-by: kumar Amber <kumar.amber@intel.com>
> Acked-by: Flavio Leitner <fbl@sysclose.org>
> ---
>  lib/dpif-netdev.c | 23 ++++++++++++++++++++++-
>  1 file changed, 22 insertions(+), 1 deletion(-)
>
> diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
> index 10aed2299..c241a2158 100644
> --- a/lib/dpif-netdev.c
> +++ b/lib/dpif-netdev.c
> @@ -7022,6 +7022,7 @@ dfc_processing(struct dp_netdev_pmd_thread *pmd,
>      size_t n_missed = 0, n_emc_hit = 0, n_phwol_hit = 0, n_mfex_opt_hit = 0;
>      struct dfc_cache *cache = &pmd->flow_cache;
>      struct dp_packet *packet;
> +    struct dp_packet_batch single_packet;
>      const size_t cnt = dp_packet_batch_size(packets_);
>      uint32_t cur_min = pmd->ctx.emc_insert_min;
>      const uint32_t recirc_depth = *recirc_depth_get();
> @@ -7032,6 +7033,11 @@ dfc_processing(struct dp_netdev_pmd_thread *pmd,
>      size_t map_cnt = 0;
>      bool batch_enable = true;
>
> +    single_packet.count = 1;
> +
> +    miniflow_extract_func mfex_func;
> +    atomic_read_relaxed(&pmd->miniflow_extract_opt, &mfex_func);
> +
>      atomic_read_relaxed(&pmd->dp->smc_enable_db, &smc_enable_db);
>      pmd_perf_update_counter(&pmd->perf_stats,
>                              md_is_valid ? PMD_STAT_RECIRC : PMD_STAT_RECV,
> @@ -7082,7 +7088,22 @@ dfc_processing(struct dp_netdev_pmd_thread *pmd,
>          }
>      }
>
> -    miniflow_extract(packet, &key->mf);
> +    /* Set the count and packet for miniflow_opt with batch_size 1. */
> +    if ((mfex_func) && (!md_is_valid)) {
> +        single_packet.packets[0] = packet;
> +        int mf_ret;
> +
> +        mf_ret = mfex_func(&single_packet, key, 1, port_no, pmd);
> +        /* Fallback to original miniflow_extract if there is a miss. */
> +        if (mf_ret) {
> +            n_mfex_opt_hit++;
> +        } else {
> +            miniflow_extract(packet, &key->mf);
> +        }
> +    } else {
> +        miniflow_extract(packet, &key->mf);
> +    }
> +
>      key->len = 0; /* Not computed yet. */
>      key->hash =
>          (md_is_valid == false)
> --
> 2.25.1
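The control flow under review — try the optimized per-packet extractor, count a hit on success, and fall back to the scalar `miniflow_extract()` on a miss — can be sketched in isolation. Note the types and names below (`struct pkt`, `mfex_ipv4_udp`, `extract_one`, the counters) are simplified stand-ins for illustration, not the real OVS API:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for a packet; the real code uses struct dp_packet. */
struct pkt {
    int is_ipv4_udp;
};

/* An optimized extractor handles only one traffic profile and returns the
 * number of packets it managed to extract (0 or 1 for a batch of one). */
typedef int (*mfex_func_t)(struct pkt *);

static int
mfex_ipv4_udp(struct pkt *p)
{
    return p->is_ipv4_udp ? 1 : 0;
}

static int mfex_opt_hits;  /* counts optimized-path hits (n_mfex_opt_hit) */
static int scalar_calls;   /* counts fallbacks to the scalar path */

static void
scalar_extract(struct pkt *p)
{
    (void) p;
    scalar_calls++;
}

/* Mirrors the patch: if an optimized extractor is configured, try it on a
 * single-packet batch; on a miss (return 0), fall back to the scalar path. */
static void
extract_one(mfex_func_t mfex, struct pkt *p)
{
    if (mfex && mfex(p)) {
        mfex_opt_hits++;
    } else {
        scalar_extract(p);
    }
}
```

The key property Eelco's review probes is that every packet the optimized extractor rejects still pays the cost of the branch plus the full scalar extract, which is why PVP numbers without AVX matter.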
```diff
diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 10aed2299..c241a2158 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -7022,6 +7022,7 @@ dfc_processing(struct dp_netdev_pmd_thread *pmd,
     size_t n_missed = 0, n_emc_hit = 0, n_phwol_hit = 0, n_mfex_opt_hit = 0;
     struct dfc_cache *cache = &pmd->flow_cache;
     struct dp_packet *packet;
+    struct dp_packet_batch single_packet;
     const size_t cnt = dp_packet_batch_size(packets_);
     uint32_t cur_min = pmd->ctx.emc_insert_min;
     const uint32_t recirc_depth = *recirc_depth_get();
@@ -7032,6 +7033,11 @@ dfc_processing(struct dp_netdev_pmd_thread *pmd,
     size_t map_cnt = 0;
     bool batch_enable = true;
 
+    single_packet.count = 1;
+
+    miniflow_extract_func mfex_func;
+    atomic_read_relaxed(&pmd->miniflow_extract_opt, &mfex_func);
+
     atomic_read_relaxed(&pmd->dp->smc_enable_db, &smc_enable_db);
     pmd_perf_update_counter(&pmd->perf_stats,
                             md_is_valid ? PMD_STAT_RECIRC : PMD_STAT_RECV,
@@ -7082,7 +7088,22 @@ dfc_processing(struct dp_netdev_pmd_thread *pmd,
         }
     }
 
-    miniflow_extract(packet, &key->mf);
+    /* Set the count and packet for miniflow_opt with batch_size 1. */
+    if ((mfex_func) && (!md_is_valid)) {
+        single_packet.packets[0] = packet;
+        int mf_ret;
+
+        mf_ret = mfex_func(&single_packet, key, 1, port_no, pmd);
+        /* Fallback to original miniflow_extract if there is a miss. */
+        if (mf_ret) {
+            n_mfex_opt_hit++;
+        } else {
+            miniflow_extract(packet, &key->mf);
+        }
+    } else {
+        miniflow_extract(packet, &key->mf);
+    }
+
     key->len = 0; /* Not computed yet. */
     key->hash =
         (md_is_valid == false)
```
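The patch reads `pmd->miniflow_extract_opt` with `atomic_read_relaxed()` once per invocation so the control path can install or clear an optimized extractor at runtime while PMD threads are polling. A hedged C11 sketch of that pattern, using `<stdatomic.h>` directly rather than the OVS wrappers (all names here — `extract_slot`, `set_extractor`, `run_extract` — are hypothetical):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical per-thread extractor slot: the control path stores a new
 * function pointer; the datapath loads it with relaxed ordering, which is
 * what OVS's atomic_read_relaxed() amounts to. */
typedef int (*extract_fn)(int);

static _Atomic(extract_fn) extract_slot;  /* NULL: no optimized extractor. */

static int
double_it(int x)
{
    return 2 * x;
}

/* Control path: install (or clear) the optimized extractor. */
static void
set_extractor(extract_fn fn)
{
    atomic_store_explicit(&extract_slot, fn, memory_order_relaxed);
}

/* Datapath: snapshot the pointer once, then use only the snapshot, so the
 * null check and the call always see the same value even if the control
 * path swaps the slot concurrently. */
static int
run_extract(int x)
{
    extract_fn fn = atomic_load_explicit(&extract_slot, memory_order_relaxed);
    return fn ? fn(x) : -1;  /* -1 stands in for the scalar fallback. */
}
```

Snapshotting once per batch, as `dfc_processing()` does, also keeps the atomic load off the per-packet path; the build failure above suggests the patch was applied against a tree where the `miniflow_extract_opt` member and `n_mfex_opt_hit` counter context did not match, consistent with the apply-robot failure.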