Spring Rabbit - Semaphore permit leak leads to "No available channels" exception


We use a CachingConnectionFactory for our consumers. On every connection drop I've seen, one checkoutPermit gets acquired and is never released. So if we go with the default channel cache size of 25, then the next time the connection is recovered after a drop, the number of available permits is 24. Over time this leads to the number of permits reaching 0, which causes the exception AmqpTimeoutException("No available channels").
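To make the arithmetic concrete, here is a minimal standalone sketch of that accounting using a plain java.util.concurrent.Semaphore (this only illustrates the leak pattern described above; it is not Spring AMQP's internal code):

import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class PermitLeakSketch {

    public static void main(String[] args) throws InterruptedException {
        // Stand-in for the per-connection checkout permits (channelCacheSize = 25).
        Semaphore checkoutPermits = new Semaphore(25);

        // Simulate repeated connection drops where one acquired permit is never released.
        for (int drop = 1; drop <= 25; drop++) {
            checkoutPermits.acquire();  // permit taken for a channel
            // ... connection drops here; the matching release never happens ...
            System.out.println("After drop " + drop + ": "
                    + checkoutPermits.availablePermits() + " permits left");
        }

        // Eventually no permits remain and a checkout times out, which is what
        // surfaces as AmqpTimeoutException("No available channels").
        boolean acquired = checkoutPermits.tryAcquire(1200, TimeUnit.MILLISECONDS);
        System.out.println("Checkout succeeded: " + acquired); // false
    }
}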

I've observed this behavior in versions 1.6.10.RELEASE, 1.7.3.RELEASE and 2.0.0.BUILD-SNAPSHOT.

Is it possible that we're using the library in the wrong way and should take care of releasing the checkoutPermit manually, possibly by closing channels on our own? (releasePermitIfNecessary is never called after a connection drop.)

Thanks in advance.


Example (using 1.7.3.RELEASE)

Configuration

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.amqp.rabbit.listener.adapter.MessageListenerAdapter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class Config {

    @Bean
    public CachingConnectionFactory cachingConnectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        connectionFactory.setUsername("username");
        connectionFactory.setPassword("password");
        connectionFactory.setVirtualHost("vhost");
        connectionFactory.setChannelCheckoutTimeout(1200);
        connectionFactory.setConnectionTimeout(1000);
        connectionFactory.setPort(5672);
        return connectionFactory;
    }

    @Bean
    public SimpleMessageListenerContainer simpleMessageListenerContainer(CachingConnectionFactory cachingConnectionFactory) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(cachingConnectionFactory);
        container.setQueueNames("test.queue");
        container.setMessageListener(new MessageListenerAdapter(new TestHandler()));
        container.start();
        return container;
    }

}
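For context, this is roughly how the exhausted permits surface on the sending side, assuming a RabbitTemplate wired against the same CachingConnectionFactory (the class below is a hypothetical sketch, not part of the original report):

import org.springframework.amqp.AmqpTimeoutException;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class SendExample {

    private final RabbitTemplate rabbitTemplate;

    public SendExample(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    public void send(String payload) {
        try {
            // Each send checks out a channel; with channelCheckoutTimeout set to 1200
            // this blocks up to 1200 ms waiting for a free permit.
            rabbitTemplate.convertAndSend("test.queue", payload);
        } catch (AmqpTimeoutException e) {
            // Thrown once the leaked permits leave no channels available:
            // "No available channels"
            System.err.println("Channel checkout failed: " + e.getMessage());
        }
    }
}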

Handler (just for the sake of testing)

public class TestHandler {

    public String handleMessage(byte[] textBytes) {
        String text = new String(textBytes);
        System.out.println("Received: " + text);
        return text;
    }

}

I test the connection drop using a proxy between RabbitMQ and the app, and manually break the connection to RabbitMQ.
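The proxy itself isn't spelled out above; a minimal single-connection TCP forwarder like the hypothetical sketch below (assuming the CachingConnectionFactory is pointed at port 5673 instead of 5672) is enough to simulate the drop by killing the process or letting the sockets break:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Listens on 5673 and forwards to the real broker on 5672.
// Point the connection factory at 5673, then kill this process to simulate a drop.
public class TcpDropProxy {

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(5673)) {
            while (true) {
                Socket client = server.accept();
                Socket broker = new Socket("localhost", 5672);
                pump(client.getInputStream(), broker.getOutputStream());
                pump(broker.getInputStream(), client.getOutputStream());
            }
        }
    }

    // Copies bytes from one socket to the other on a background thread.
    private static void pump(InputStream in, OutputStream out) {
        new Thread(() -> {
            byte[] buffer = new byte[8192];
            try {
                int n;
                while ((n = in.read(buffer)) != -1) {
                    out.write(buffer, 0, n);
                    out.flush();
                }
            } catch (Exception ignored) {
                // Connection broken - the application side will see the drop.
            }
        }).start();
    }
}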

Confirmed.

That's a bug. When we lose the connection we lose the channels as well; therefore we have to reset the permits associated with them.

Please raise a JIRA ticket with a proper description.

Meanwhile, I guess the workaround is to not use setChannelCheckoutTimeout(1200) and leave it at 0, the default value:

/**
 * Sets the channel checkout timeout. When greater than 0, enables channel limiting
 * in that the {@link #channelCacheSize} becomes the total number of available channels per
 * connection rather than a simple cache size. Note that changing the {@link #channelCacheSize}
 * does not affect the limit on existing connection(s), invoke {@link #destroy()} to cause a
 * new connection to be created with the new limit.
 * <p>
 * Since 1.5.5, also applies to getting a connection when the cache mode is CONNECTION.
 * @param channelCheckoutTimeout the timeout in milliseconds; default 0 (channel limiting not enabled).
 * @since 1.4.2
 * @see #setConnectionLimit(int)
 */
public void setChannelCheckoutTimeout(long channelCheckoutTimeout) {
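Applied to the configuration above, the workaround amounts to dropping the setChannelCheckoutTimeout(1200) call and leaving the default of 0 (a sketch with a hypothetical class name; the remaining settings are unchanged from the original bean):

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class WorkaroundConfig {

    @Bean
    public CachingConnectionFactory cachingConnectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        connectionFactory.setUsername("username");
        connectionFactory.setPassword("password");
        connectionFactory.setVirtualHost("vhost");
        // channelCheckoutTimeout is left at its default of 0, so channelCacheSize
        // stays a plain cache size and no checkout permits are involved.
        connectionFactory.setConnectionTimeout(1000);
        connectionFactory.setPort(5672);
        return connectionFactory;
    }

}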
