Currently the event cache uses a per-cpu 'struct nf_conntrack_ecache' which contains a pointer to a 'struct nf_conn'. When it assigns a conntrack to the 'struct nf_conntrack_ecache' in PREROUTING it increases the conntrack refcount, and it decreases the refcount again when it generates the event for this conntrack in POSTROUTING. This means that when we drop a packet in iptables, the event cache is never called in POSTROUTING and the refcount is not decreased. The only reason this doesn't introduce a refcount leak in this situation is that the event cache has:

	if (ct != ecache->ct)
		__nf_ct_event_cache_init(ct);

which makes sure that we send an event and decrease the refcount for the previous conntrack entry when we start processing a new entry and the event for the previous entry hasn't been sent for some reason (in this case because the packet was dropped and never reached POSTROUTING).

There is another, more critical case that fails due to this refcount play: layer 3 protocol handler registration and unregistration. When we register a layer 3 protocol handler we first register at PREROUTING and only later at POSTROUTING, which means packets can go through the event cache at PREROUTING but not at POSTROUTING, since the latter hook isn't registered yet. This is similar to the case above, but what if we unload the layer 3 protocol handler before we receive another packet (which would generate an event for the previous conntrack and decrease the refcount)? Since we unregister the layer 3 hooks for this protocol, it's not possible to receive another packet, and we end up with a conntrack entry on the unconfirmed list with a too high refcount. The 'rmmod' of the layer 3 protocol handler then never returns; we are stuck waiting forever for the refcount to decrease.

This patch removes the refcount increasing/decreasing for each packet.
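The stuck-rmmod scenario above can be sketched in a few lines of userspace C. This is a simplified, hypothetical model of the OLD scheme only: the names (cache_events_prerouting, deliver_postrouting) and the bare refcount integer are illustrative, not the kernel's APIs.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model of the old per-cpu event cache, which pinned the
   conntrack with a refcount from PREROUTING until POSTROUTING. */
struct conn { int refcnt; };
struct ecache_old { struct conn *ct; };

static struct ecache_old cache;

static void cache_events_prerouting(struct conn *ct)
{
	if (cache.ct != ct) {
		if (cache.ct)
			cache.ct->refcnt--; /* deliver + release previous entry */
		cache.ct = ct;
		ct->refcnt++;               /* pin until POSTROUTING */
	}
}

static void deliver_postrouting(void)
{
	if (cache.ct) {
		cache.ct->refcnt--;         /* event sent, drop the pin */
		cache.ct = NULL;
	}
}
```

If the packet is dropped (or the POSTROUTING hook was never registered), deliver_postrouting() never runs and the entry's refcount stays elevated until some later packet happens to evict it from the cache; if no later packet can arrive, it stays elevated forever.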
It also removes the 'struct nf_conn' pointer in 'struct nf_conntrack_ecache' and replaces it with an 'unsigned int' holding the conntrack id of the entry we are caching events for. This has two advantages: the id is hopefully more unique than the address of the entry, since memory can be reused, and we have no easy way to access the conntrack entry via that id. Not being able to easily access the conntrack entry with that id is a good thing, since that entry might not exist anymore now that we no longer play refcount games.

__nf_ct_event_cache_init() has been removed and there are no implicit calls to nf_ct_deliver_cached_events() anymore; instead we call nf_ct_deliver_cached_events() at POSTROUTING, in destroy_conntrack() and in the newly introduced release_conntrack(), which is used to send cached events when we stop processing a conntrack entry in the middle of the stack, for example in nf_queue. When events are sent via release_conntrack() we force them to be sent even if the conntrack entry isn't confirmed.

We do not call nf_ct_deliver_cached_events() for dropped packets; when a new packet comes in and we find old undelivered events for the old conntrack entry, we just reset the event cache, trusting that there was a good reason why the events weren't delivered.

This patch also exports the registration/unregistration functions instead of the data structure they operate on.
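The new id-based scheme can likewise be sketched in userspace. This is a minimal single-CPU simulation under stated assumptions: the struct layout mirrors the patched 'struct nf_conntrack_ecache', but event_cache(), deliver_cached_events() and the 'delivered' variable (standing in for the notifier call chain) are illustrative names, not kernel functions.

```c
#include <stdbool.h>

/* Hypothetical userspace model of the patched, id-based event cache. */
struct conn {
	unsigned int id;     /* analogous to ct->id */
	bool confirmed;
};

struct ecache {
	unsigned int id;     /* id of the conntrack we hold events for */
	unsigned int events; /* bitmask of pending events */
};

static struct ecache cache;    /* stands in for the per-cpu variable */
static unsigned int delivered; /* last mask "sent" to notifiers */

static void event_cache(unsigned int event, struct conn *ct)
{
	if (cache.id != ct->id) {
		/* Different entry: stale events from an earlier (e.g.
		   dropped) packet are silently discarded. */
		cache.id = ct->id;
		cache.events = 0;
	}
	cache.events |= event;
}

static void deliver_cached_events(struct conn *ct, bool force)
{
	if (cache.id != ct->id)
		return;                   /* cache was overwritten: do nothing */
	if ((ct->confirmed || force) && cache.events) {
		delivered = cache.events; /* notifier chain would run here */
		cache.events = 0;         /* never send the same events twice */
	}
}
```

Note there is no refcount anywhere: if the entry behind a cached id disappears, the worst case is that the next caller's id fails to match and the stale events are dropped, which is exactly the intended behaviour.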
Signed-off-by: Martin Josefsson
---
 include/linux/skbuff.h                         |    6 +
 include/net/netfilter/nf_conntrack_core.h      |    2 
 include/net/netfilter/nf_conntrack_ecache.h    |   79 +++++++------------------
 net/ipv4/netfilter/nf_conntrack_proto_icmp.c   |    2 
 net/ipv6/netfilter/nf_conntrack_proto_icmpv6.c |    2 
 net/netfilter/nf_conntrack_core.c              |   32 +++++++---
 net/netfilter/nf_conntrack_ecache.c            |   70 +++++++++-------------
 net/netfilter/nf_conntrack_ftp.c               |    8 +-
 net/netfilter/nf_conntrack_helper.c            |    2 
 net/netfilter/nf_conntrack_proto_sctp.c        |    4 -
 net/netfilter/nf_conntrack_proto_tcp.c         |    6 -
 net/netfilter/nf_conntrack_proto_udp.c         |    2 
 net/netfilter/nf_conntrack_standalone.c        |    5 -
 net/netfilter/nf_queue.c                       |    2 
 net/netfilter/xt_CONNMARK.c                    |    7 +-
 15 files changed, 108 insertions(+), 121 deletions(-)

Index: linux-2.6.19-rc3-git4.quilt/include/net/netfilter/nf_conntrack_ecache.h
===================================================================
--- linux-2.6.19-rc3-git4.quilt.orig/include/net/netfilter/nf_conntrack_ecache.h	2006-11-02 19:14:32.000000000 +0100
+++ linux-2.6.19-rc3-git4.quilt/include/net/netfilter/nf_conntrack_ecache.h	2006-11-02 19:14:34.000000000 +0100
@@ -12,83 +12,52 @@
 #include

 struct nf_conntrack_ecache {
-	struct nf_conn *ct;
+	unsigned int id;
 	unsigned int events;
 };
 DECLARE_PER_CPU(struct nf_conntrack_ecache, nf_conntrack_ecache);

-#define CONNTRACK_ECACHE(x)	(__get_cpu_var(nf_conntrack_ecache).x)
+extern int nf_conntrack_register_notifier(struct notifier_block *nb);
+extern int nf_conntrack_unregister_notifier(struct notifier_block *nb);
+extern int nf_conntrack_expect_register_notifier(struct notifier_block *nb);
+extern int nf_conntrack_expect_unregister_notifier(struct notifier_block *nb);

-extern struct atomic_notifier_head nf_conntrack_chain;
-extern struct atomic_notifier_head nf_conntrack_expect_chain;
+extern void nf_ct_deliver_cached_events(struct nf_conn *ct, unsigned int force);

-static inline int nf_conntrack_register_notifier(struct notifier_block *nb)
-{
-	return atomic_notifier_chain_register(&nf_conntrack_chain, nb);
-}
-
-static inline int nf_conntrack_unregister_notifier(struct notifier_block *nb)
-{
-	return atomic_notifier_chain_unregister(&nf_conntrack_chain, nb);
-}
-
-static inline int
-nf_conntrack_expect_register_notifier(struct notifier_block *nb)
-{
-	return atomic_notifier_chain_register(&nf_conntrack_expect_chain, nb);
-}
-
-static inline int
-nf_conntrack_expect_unregister_notifier(struct notifier_block *nb)
-{
-	return atomic_notifier_chain_unregister(&nf_conntrack_expect_chain,
-						nb);
-}
-
-extern void nf_ct_deliver_cached_events(const struct nf_conn *ct);
-extern void __nf_ct_event_cache_init(struct nf_conn *ct);
-extern void nf_ct_event_cache_flush(void);
+extern void nf_conntrack_expect_event(enum ip_conntrack_expect_events event,
+				      struct nf_conntrack_expect *exp);

 static inline void
 nf_conntrack_event_cache(enum ip_conntrack_events event,
-			 const struct sk_buff *skb)
+			 struct nf_conn *ct)
 {
-	struct nf_conn *ct = (struct nf_conn *)skb->nfct;
 	struct nf_conntrack_ecache *ecache;

-	local_bh_disable();
 	ecache = &__get_cpu_var(nf_conntrack_ecache);
-	if (ct != ecache->ct)
-		__nf_ct_event_cache_init(ct);
+	if (ecache->id != ct->id) {
+		/* We are being called for a different entry, the old data
+		   probably isn't valid anymore, overwrite it. */
+		ecache->id = ct->id;
+		ecache->events = 0;
+	}
 	ecache->events |= event;
-	local_bh_enable();
-}
-
-static inline void nf_conntrack_event(enum ip_conntrack_events event,
-				      struct nf_conn *ct)
-{
-	if (nf_ct_is_confirmed(ct) && !nf_ct_is_dying(ct))
-		atomic_notifier_call_chain(&nf_conntrack_chain, event, ct);
-}
-
-static inline void
-nf_conntrack_expect_event(enum ip_conntrack_expect_events event,
-			  struct nf_conntrack_expect *exp)
-{
-	atomic_notifier_call_chain(&nf_conntrack_expect_chain, event, exp);
 }

 #else /* CONFIG_NF_CONNTRACK_EVENTS */

+static inline int nf_conntrack_register_notifier(struct notifier_block *nb) { return 0; }
+static inline int nf_conntrack_unregister_notifier(struct notifier_block *nb) { return 0; }
+static inline int nf_conntrack_expect_register_notifier(struct notifier_block *nb) { return 0; }
+static inline int
+nf_conntrack_expect_unregister_notifier(struct notifier_block *nb) { return 0; }
+
 static inline void nf_conntrack_event_cache(enum ip_conntrack_events event,
-					    const struct sk_buff *skb) {}
-static inline void nf_conntrack_event(enum ip_conntrack_events event,
-				      struct nf_conn *ct) {}
-static inline void nf_ct_deliver_cached_events(const struct nf_conn *ct) {}
+					    struct nf_conn *ct) {}
 static inline void
 nf_conntrack_expect_event(enum ip_conntrack_expect_events event,
 			  struct nf_conntrack_expect *exp) {}
-static inline void nf_ct_event_cache_flush(void) {}
+static inline void nf_ct_deliver_cached_events(struct nf_conn *ct,
+					       unsigned int force) {}

 #endif /* CONFIG_NF_CONNTRACK_EVENTS */
 #endif /*_NF_CONNTRACK_ECACHE_H*/
Index: linux-2.6.19-rc3-git4.quilt/net/ipv4/netfilter/nf_conntrack_proto_icmp.c
===================================================================
--- linux-2.6.19-rc3-git4.quilt.orig/net/ipv4/netfilter/nf_conntrack_proto_icmp.c	2006-11-02 19:14:09.000000000 +0100
+++ linux-2.6.19-rc3-git4.quilt/net/ipv4/netfilter/nf_conntrack_proto_icmp.c	2006-11-02 19:14:34.000000000 +0100
@@ -110,7 +110,7 @@ static int icmp_packet(struct nf_conn *c
 		ct->timeout.function((unsigned long)ct);
 	} else {
 		atomic_inc(&ct->proto.icmp.count);
-		nf_conntrack_event_cache(IPCT_PROTOINFO_VOLATILE, skb);
+		nf_conntrack_event_cache(IPCT_PROTOINFO_VOLATILE, ct);
 		nf_ct_refresh_acct(ct, ctinfo, skb, nf_ct_icmp_timeout);
 	}

Index: linux-2.6.19-rc3-git4.quilt/net/ipv6/netfilter/nf_conntrack_proto_icmpv6.c
===================================================================
--- linux-2.6.19-rc3-git4.quilt.orig/net/ipv6/netfilter/nf_conntrack_proto_icmpv6.c	2006-11-02 19:14:09.000000000 +0100
+++ linux-2.6.19-rc3-git4.quilt/net/ipv6/netfilter/nf_conntrack_proto_icmpv6.c	2006-11-02 19:14:34.000000000 +0100
@@ -113,7 +113,7 @@ static int icmpv6_packet(struct nf_conn
 		ct->timeout.function((unsigned long)ct);
 	} else {
 		atomic_inc(&ct->proto.icmp.count);
-		nf_conntrack_event_cache(IPCT_PROTOINFO_VOLATILE, skb);
+		nf_conntrack_event_cache(IPCT_PROTOINFO_VOLATILE, ct);
 		nf_ct_refresh_acct(ct, ctinfo, skb, nf_ct_icmpv6_timeout);
 	}

Index: linux-2.6.19-rc3-git4.quilt/net/netfilter/nf_conntrack_core.c
===================================================================
--- linux-2.6.19-rc3-git4.quilt.orig/net/netfilter/nf_conntrack_core.c	2006-11-02 19:14:33.000000000 +0100
+++ linux-2.6.19-rc3-git4.quilt/net/netfilter/nf_conntrack_core.c	2006-11-02 19:14:34.000000000 +0100
@@ -311,7 +311,14 @@ destroy_conntrack(struct nf_conntrack *n
 	NF_CT_ASSERT(atomic_read(&nfct->use) == 0);
 	NF_CT_ASSERT(!timer_pending(&ct->timeout));

-	nf_conntrack_event(IPCT_DESTROY, ct);
+	/* Disable softirqs around the calls to the event cache since we might
+	   be called from a different context and interrupted by a softirq.
+	   That might lead to missed events. */
+	local_bh_disable();
+	nf_conntrack_event_cache(IPCT_DESTROY, ct);
+	nf_ct_deliver_cached_events(ct, 0);
+	local_bh_enable();
+
 	set_bit(IPS_DYING_BIT, &ct->status);

 	/* To make sure we don't get any weird locking issues here:
@@ -351,6 +358,17 @@ destroy_conntrack(struct nf_conntrack *n
 	nf_conntrack_free(ct);
 }

+/* This is used to release any cached data before we suspend the conntrack
+   entry with for example nfqueue. Events are forced to be sent even for
+   unconfirmed entries */
+static void
+release_conntrack(struct nf_conntrack *nfct)
+{
+	struct nf_conn *ct = (struct nf_conn *)nfct;
+
+	nf_ct_deliver_cached_events(ct, 1);
+}
+
 static void death_by_timeout(unsigned long ul_conntrack)
 {
 	struct nf_conn *ct = (void *)ul_conntrack;
@@ -484,14 +502,14 @@ __nf_conntrack_confirm(struct sk_buff **
 	write_unlock_bh(&nf_conntrack_lock);
 	help = nfct_help(ct);
 	if (help && help->helper)
-		nf_conntrack_event_cache(IPCT_HELPER, *pskb);
+		nf_conntrack_event_cache(IPCT_HELPER, ct);
 #ifdef CONFIG_NF_NAT_NEEDED
 	if (test_bit(IPS_SRC_NAT_DONE_BIT, &ct->status) ||
 	    test_bit(IPS_DST_NAT_DONE_BIT, &ct->status))
-		nf_conntrack_event_cache(IPCT_NATINFO, *pskb);
+		nf_conntrack_event_cache(IPCT_NATINFO, ct);
 #endif
 	nf_conntrack_event_cache(master_ct(ct) ?
-				 IPCT_RELATED : IPCT_NEW, *pskb);
+				 IPCT_RELATED : IPCT_NEW, ct);
 	return NF_ACCEPT;

 out:
@@ -614,6 +632,7 @@ __nf_conntrack_alloc(const struct nf_con
 	atomic_set(&conntrack->ct_general.use, 1);
 	conntrack->ct_general.destroy = destroy_conntrack;
+	conntrack->ct_general.release = release_conntrack;
 	conntrack->tuplehash[IP_CT_DIR_ORIGINAL].tuple = *orig;
 	conntrack->tuplehash[IP_CT_DIR_REPLY].tuple = *repl;
 	/* Don't set timer yet: wait for confirmation */
@@ -833,7 +852,7 @@ nf_conntrack_in(int pf, unsigned int hoo
 	}

 	if (set_reply && !test_and_set_bit(IPS_SEEN_REPLY_BIT, &ct->status))
-		nf_conntrack_event_cache(IPCT_STATUS, *pskb);
+		nf_conntrack_event_cache(IPCT_STATUS, ct);

 	return ret;
 }
@@ -895,7 +914,7 @@ void __nf_ct_refresh_acct(struct nf_conn

 	/* must be unlocked when calling event cache */
 	if (event)
-		nf_conntrack_event_cache(event, skb);
+		nf_conntrack_event_cache(event, ct);
 }

 #if defined(CONFIG_NF_CT_NETLINK) || \
@@ -1049,7 +1068,6 @@ void nf_conntrack_cleanup(void)
 	   delete... */
 	synchronize_net();

-	nf_ct_event_cache_flush();
 i_see_dead_people:
 	nf_conntrack_flush();
 	if (atomic_read(&nf_conntrack_count) != 0) {
Index: linux-2.6.19-rc3-git4.quilt/net/netfilter/nf_conntrack_ecache.c
===================================================================
--- linux-2.6.19-rc3-git4.quilt.orig/net/netfilter/nf_conntrack_ecache.c	2006-11-02 19:14:32.000000000 +0100
+++ linux-2.6.19-rc3-git4.quilt/net/netfilter/nf_conntrack_ecache.c	2006-11-02 19:14:34.000000000 +0100
@@ -32,60 +32,50 @@ ATOMIC_NOTIFIER_HEAD(nf_conntrack_expect

 DEFINE_PER_CPU(struct nf_conntrack_ecache, nf_conntrack_ecache);

-/* deliver cached events and clear cache entry - must be called with locally
- * disabled softirqs */
-static inline void
-__nf_ct_deliver_cached_events(struct nf_conntrack_ecache *ecache)
+int nf_conntrack_register_notifier(struct notifier_block *nb)
 {
-	if (nf_ct_is_confirmed(ecache->ct) && !nf_ct_is_dying(ecache->ct)
-	    && ecache->events)
-		atomic_notifier_call_chain(&nf_conntrack_chain, ecache->events,
-					   ecache->ct);
+	return atomic_notifier_chain_register(&nf_conntrack_chain, nb);
+}

-	ecache->events = 0;
-	nf_ct_put(ecache->ct);
-	ecache->ct = NULL;
+int nf_conntrack_unregister_notifier(struct notifier_block *nb)
+{
+	return atomic_notifier_chain_unregister(&nf_conntrack_chain, nb);
 }

-/* Deliver all cached events for a particular conntrack. This is called
- * by code prior to async packet handling for freeing the skb */
-void nf_ct_deliver_cached_events(const struct nf_conn *ct)
+int nf_conntrack_expect_register_notifier(struct notifier_block *nb)
 {
-	struct nf_conntrack_ecache *ecache;
+	return atomic_notifier_chain_register(&nf_conntrack_expect_chain, nb);
+}

-	local_bh_disable();
-	ecache = &__get_cpu_var(nf_conntrack_ecache);
-	if (ecache->ct == ct)
-		__nf_ct_deliver_cached_events(ecache);
-	local_bh_enable();
+int nf_conntrack_expect_unregister_notifier(struct notifier_block *nb)
+{
+	return atomic_notifier_chain_unregister(&nf_conntrack_expect_chain, nb);
 }

-/* Deliver cached events for old pending events, if current conntrack != old */
-void __nf_ct_event_cache_init(struct nf_conn *ct)
+/* deliver cached events and clear cache entry */
+void nf_ct_deliver_cached_events(struct nf_conn *ct, unsigned int force)
 {
 	struct nf_conntrack_ecache *ecache;

-	/* take care of delivering potentially old events */
 	ecache = &__get_cpu_var(nf_conntrack_ecache);
-	BUG_ON(ecache->ct == ct);
-	if (ecache->ct)
-		__nf_ct_deliver_cached_events(ecache);
-	/* initialize for this conntrack/packet */
-	ecache->ct = ct;
-	nf_conntrack_get(&ct->ct_general);
+	if (unlikely(ecache->id != ct->id)) {
+		/* Someone overwrote our precious data, let's sulk a bit and
+		   do nothing... */
+		return;
+	}
+
+	if ((nf_ct_is_confirmed(ct) || force) && ecache->events) {
+		atomic_notifier_call_chain(&nf_conntrack_chain, ecache->events,
+					   ct);
+		/* Prevent that we accidentally send the same events multiple
+		   times. */
+		ecache->events = 0;
+	}
 }

-/* flush the event cache - touches other CPU's data and must not be called
- * while packets are still passing through the code */
-void nf_ct_event_cache_flush(void)
+void nf_conntrack_expect_event(enum ip_conntrack_expect_events event,
+			       struct nf_conntrack_expect *exp)
 {
-	struct nf_conntrack_ecache *ecache;
-	int cpu;
-
-	for_each_possible_cpu(cpu) {
-		ecache = &per_cpu(nf_conntrack_ecache, cpu);
-		if (ecache->ct)
-			nf_ct_put(ecache->ct);
-	}
+	atomic_notifier_call_chain(&nf_conntrack_expect_chain, event, exp);
 }
Index: linux-2.6.19-rc3-git4.quilt/net/netfilter/nf_conntrack_ftp.c
===================================================================
--- linux-2.6.19-rc3-git4.quilt.orig/net/netfilter/nf_conntrack_ftp.c	2006-11-02 19:14:32.000000000 +0100
+++ linux-2.6.19-rc3-git4.quilt/net/netfilter/nf_conntrack_ftp.c	2006-11-02 19:14:34.000000000 +0100
@@ -331,7 +331,7 @@ static int find_nl_seq(u32 seq, const st

 /* We don't update if it's older than what we have. */
 static void update_nl_seq(u32 nl_seq, struct ip_ct_ftp_master *info, int dir,
-			  struct sk_buff *skb)
+			  struct nf_conn *ct)
 {
 	unsigned int i, oldest = NUM_SEQ_TO_REMEMBER;

@@ -347,10 +347,10 @@ static void update_nl_seq(u32 nl_seq, st

 	if (info->seq_aft_nl_num[dir] < NUM_SEQ_TO_REMEMBER) {
 		info->seq_aft_nl[dir][info->seq_aft_nl_num[dir]++] = nl_seq;
-		nf_conntrack_event_cache(IPCT_HELPINFO_VOLATILE, skb);
+		nf_conntrack_event_cache(IPCT_HELPINFO_VOLATILE, ct);
 	} else if (oldest != NUM_SEQ_TO_REMEMBER) {
 		info->seq_aft_nl[dir][oldest] = nl_seq;
-		nf_conntrack_event_cache(IPCT_HELPINFO_VOLATILE, skb);
+		nf_conntrack_event_cache(IPCT_HELPINFO_VOLATILE, ct);
 	}
 }

@@ -538,7 +538,7 @@ out_update_nl:
 	/* Now if this ends in \n, update ftp info.  Seq may have been
 	 * adjusted by NAT code. */
 	if (ends_in_nl)
-		update_nl_seq(seq, ct_ftp_info, dir, *pskb);
+		update_nl_seq(seq, ct_ftp_info, dir, ct);
 out:
 	spin_unlock_bh(&nf_ftp_lock);
 	return ret;
Index: linux-2.6.19-rc3-git4.quilt/net/netfilter/nf_conntrack_helper.c
===================================================================
--- linux-2.6.19-rc3-git4.quilt.orig/net/netfilter/nf_conntrack_helper.c	2006-11-02 19:14:31.000000000 +0100
+++ linux-2.6.19-rc3-git4.quilt/net/netfilter/nf_conntrack_helper.c	2006-11-02 19:14:34.000000000 +0100
@@ -92,7 +92,7 @@ static inline int unhelp(struct nf_connt
 	struct nf_conn_help *help = nfct_help(ct);

 	if (help && help->helper == me) {
-		nf_conntrack_event(IPCT_HELPER, ct);
+		nf_conntrack_event_cache(IPCT_HELPER, ct);
 		help->helper = NULL;
 	}
 	return 0;
Index: linux-2.6.19-rc3-git4.quilt/net/netfilter/nf_conntrack_proto_sctp.c
===================================================================
--- linux-2.6.19-rc3-git4.quilt.orig/net/netfilter/nf_conntrack_proto_sctp.c	2006-11-02 19:14:32.000000000 +0100
+++ linux-2.6.19-rc3-git4.quilt/net/netfilter/nf_conntrack_proto_sctp.c	2006-11-02 19:14:34.000000000 +0100
@@ -419,7 +419,7 @@ static int sctp_packet(struct nf_conn *c
 		conntrack->proto.sctp.state = newconntrack;
 		if (oldsctpstate != newconntrack)
-			nf_conntrack_event_cache(IPCT_PROTOINFO, skb);
+			nf_conntrack_event_cache(IPCT_PROTOINFO, conntrack);
 		write_unlock_bh(&sctp_lock);
 	}

@@ -430,7 +430,7 @@ static int sctp_packet(struct nf_conn *c
 	    && newconntrack == SCTP_CONNTRACK_ESTABLISHED) {
 		DEBUGP("Setting assured bit\n");
 		set_bit(IPS_ASSURED_BIT, &conntrack->status);
-		nf_conntrack_event_cache(IPCT_STATUS, skb);
+		nf_conntrack_event_cache(IPCT_STATUS, conntrack);
 	}

 	return NF_ACCEPT;
Index: linux-2.6.19-rc3-git4.quilt/net/netfilter/nf_conntrack_proto_tcp.c
===================================================================
--- linux-2.6.19-rc3-git4.quilt.orig/net/netfilter/nf_conntrack_proto_tcp.c	2006-11-02 19:14:32.000000000 +0100
+++ linux-2.6.19-rc3-git4.quilt/net/netfilter/nf_conntrack_proto_tcp.c	2006-11-02 19:14:34.000000000 +0100
@@ -996,9 +996,9 @@ static int tcp_packet(struct nf_conn *co
 		   ? nf_ct_tcp_timeout_max_retrans : *tcp_timeouts[new_state];
 	write_unlock_bh(&tcp_lock);

-	nf_conntrack_event_cache(IPCT_PROTOINFO_VOLATILE, skb);
+	nf_conntrack_event_cache(IPCT_PROTOINFO_VOLATILE, conntrack);
 	if (new_state != old_state)
-		nf_conntrack_event_cache(IPCT_PROTOINFO, skb);
+		nf_conntrack_event_cache(IPCT_PROTOINFO, conntrack);

 	if (!test_bit(IPS_SEEN_REPLY_BIT, &conntrack->status)) {
 		/* If only reply is a RST, we can consider ourselves not to
@@ -1019,7 +1019,7 @@ static int tcp_packet(struct nf_conn *co
 		   after SYN_RECV or a valid answer for a picked up
 		   connection. */
 		set_bit(IPS_ASSURED_BIT, &conntrack->status);
-		nf_conntrack_event_cache(IPCT_STATUS, skb);
+		nf_conntrack_event_cache(IPCT_STATUS, conntrack);
 	}
 	nf_ct_refresh_acct(conntrack, ctinfo, skb, timeout);
Index: linux-2.6.19-rc3-git4.quilt/net/netfilter/nf_conntrack_proto_udp.c
===================================================================
--- linux-2.6.19-rc3-git4.quilt.orig/net/netfilter/nf_conntrack_proto_udp.c	2006-11-02 19:14:32.000000000 +0100
+++ linux-2.6.19-rc3-git4.quilt/net/netfilter/nf_conntrack_proto_udp.c	2006-11-02 19:14:34.000000000 +0100
@@ -88,7 +88,7 @@ static int udp_packet(struct nf_conn *co
 				   nf_ct_udp_timeout_stream);
 		/* Also, more likely to be important, and not a probe */
 		if (!test_and_set_bit(IPS_ASSURED_BIT, &conntrack->status))
-			nf_conntrack_event_cache(IPCT_STATUS, skb);
+			nf_conntrack_event_cache(IPCT_STATUS, conntrack);
 	} else
 		nf_ct_refresh_acct(conntrack, ctinfo, skb,
 				   nf_ct_udp_timeout);
Index: linux-2.6.19-rc3-git4.quilt/net/netfilter/nf_conntrack_standalone.c
===================================================================
--- linux-2.6.19-rc3-git4.quilt.orig/net/netfilter/nf_conntrack_standalone.c	2006-11-02 19:14:33.000000000 +0100
+++ linux-2.6.19-rc3-git4.quilt/net/netfilter/nf_conntrack_standalone.c	2006-11-02 19:14:34.000000000 +0100
@@ -643,11 +643,10 @@ void need_conntrack(void)
 }

 #ifdef CONFIG_NF_CONNTRACK_EVENTS
-EXPORT_SYMBOL_GPL(nf_conntrack_chain);
-EXPORT_SYMBOL_GPL(nf_conntrack_expect_chain);
 EXPORT_SYMBOL_GPL(nf_conntrack_register_notifier);
 EXPORT_SYMBOL_GPL(nf_conntrack_unregister_notifier);
-EXPORT_SYMBOL_GPL(__nf_ct_event_cache_init);
+EXPORT_SYMBOL_GPL(nf_conntrack_expect_register_notifier);
+EXPORT_SYMBOL_GPL(nf_conntrack_expect_unregister_notifier);
 EXPORT_PER_CPU_SYMBOL_GPL(nf_conntrack_ecache);
 EXPORT_SYMBOL_GPL(nf_ct_deliver_cached_events);
 #endif
Index: linux-2.6.19-rc3-git4.quilt/include/linux/skbuff.h
===================================================================
--- linux-2.6.19-rc3-git4.quilt.orig/include/linux/skbuff.h	2006-11-02 19:14:10.000000000 +0100
+++ linux-2.6.19-rc3-git4.quilt/include/linux/skbuff.h	2006-11-02 19:14:34.000000000 +0100
@@ -89,6 +89,7 @@ struct net_device;
 #ifdef CONFIG_NETFILTER
 struct nf_conntrack {
 	atomic_t use;
+	void (*release)(struct nf_conntrack *);
 	void (*destroy)(struct nf_conntrack *);
 };

@@ -1434,6 +1435,11 @@ static inline void nf_conntrack_get(stru
 	if (nfct)
 		atomic_inc(&nfct->use);
 }
+static inline void nf_conntrack_release_cache(struct nf_conntrack *nfct)
+{
+	if (nfct && nfct->release)
+		nfct->release(nfct);
+}
 #if defined(CONFIG_NF_CONNTRACK) || defined(CONFIG_NF_CONNTRACK_MODULE)
 static inline void nf_conntrack_get_reasm(struct sk_buff *skb)
 {
Index: linux-2.6.19-rc3-git4.quilt/net/netfilter/nf_queue.c
===================================================================
--- linux-2.6.19-rc3-git4.quilt.orig/net/netfilter/nf_queue.c	2006-11-02 19:14:09.000000000 +0100
+++ linux-2.6.19-rc3-git4.quilt/net/netfilter/nf_queue.c	2006-11-02 19:14:34.000000000 +0100
@@ -171,6 +171,8 @@ int nf_queue(struct sk_buff *skb,
 {
 	struct sk_buff *segs;

+	nf_conntrack_release_cache(skb->nfct);
+
 	if (!skb_is_gso(skb))
 		return __nf_queue(skb, elem, pf, hook, indev, outdev, okfn,
 				  queuenum);
Index: linux-2.6.19-rc3-git4.quilt/include/net/netfilter/nf_conntrack_core.h
===================================================================
--- linux-2.6.19-rc3-git4.quilt.orig/include/net/netfilter/nf_conntrack_core.h	2006-11-02 19:14:32.000000000 +0100
+++ linux-2.6.19-rc3-git4.quilt/include/net/netfilter/nf_conntrack_core.h	2006-11-02 19:14:34.000000000 +0100
@@ -66,7 +66,7 @@ static inline int nf_conntrack_confirm(s
 	if (ct) {
 		if (!nf_ct_is_confirmed(ct))
 			ret = __nf_conntrack_confirm(pskb);
-		nf_ct_deliver_cached_events(ct);
+		nf_ct_deliver_cached_events(ct, 0);
 	}
 	return ret;
 }
Index: linux-2.6.19-rc3-git4.quilt/net/netfilter/xt_CONNMARK.c
===================================================================
--- linux-2.6.19-rc3-git4.quilt.orig/net/netfilter/xt_CONNMARK.c	2006-11-02 19:14:09.000000000 +0100
+++ linux-2.6.19-rc3-git4.quilt/net/netfilter/xt_CONNMARK.c	2006-11-02 19:14:34.000000000 +0100
@@ -31,6 +31,7 @@ MODULE_ALIAS("ipt_CONNMARK");
 #include
 #include
 #include
+#include

 static unsigned int
 target(struct sk_buff **pskb,
@@ -56,7 +57,8 @@ target(struct sk_buff **pskb,
 #if defined(CONFIG_IP_NF_CONNTRACK) || defined(CONFIG_IP_NF_CONNTRACK_MODULE)
 			ip_conntrack_event_cache(IPCT_MARK, *pskb);
 #else
-			nf_conntrack_event_cache(IPCT_MARK, *pskb);
+			nf_conntrack_event_cache(IPCT_MARK,
+						 (struct nf_conn *)(*pskb)->nfct);
 #endif
 		}
 		break;
@@ -68,7 +70,8 @@ target(struct sk_buff **pskb,
 #if defined(CONFIG_IP_NF_CONNTRACK) || defined(CONFIG_IP_NF_CONNTRACK_MODULE)
 			ip_conntrack_event_cache(IPCT_MARK, *pskb);
 #else
-			nf_conntrack_event_cache(IPCT_MARK, *pskb);
+			nf_conntrack_event_cache(IPCT_MARK,
+						 (struct nf_conn *)(*pskb)->nfct);
 #endif
 		}
 		break;