0018-9448 (c) 2021 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TIT.2021.3093891, IEEE
Transactions on Information Theory
Negligible Cooperation: Contrasting the Maximal- and Average-Error Cases

Parham Noorzad, Member, IEEE, Michael Langberg, Senior Member, IEEE, and Michelle Effros, Fellow, IEEE
Abstract—In communication networks, cooperative strategies are coding schemes where network nodes work together to improve performance metrics such as the total rate delivered across the network. This work studies encoder cooperation in the setting of a discrete multiple access channel (MAC) with two encoders and a single decoder. A network node, here called the cooperation facilitator (CF), that is connected to both encoders via rate-limited links, enables the cooperation strategy. Previous work by the authors presents two classes of MACs: (i) one class where the average-error sum-capacity has an infinite derivative in the limit where CF output link capacities approach zero, and (ii) a second class of MACs where the maximal-error sum-capacity is not continuous at the point where the output link capacities of the CF equal zero. This work contrasts the power of the CF in the maximal- and average-error cases, showing that a constant number of bits communicated over the CF output link can yield a positive gain in the maximal-error sum-capacity, while a far greater number of bits, even a number that grows sublinearly in the blocklength, can never yield a non-negligible gain in the average-error sum-capacity.
Index Terms—Capacity region continuity, cooperation facilitator, edge removal problem, maximal-error capacity region, multiple access channel.
I. INTRODUCTION
Interference is an important limiting factor in the capacities
of many communication networks. One way to reduce
interference is to enable network nodes to work together
to coordinate their transmissions. Strategies that employ
coordinated transmissions are called cooperation strategies.
Perhaps the simplest cooperation strategy is “scheduled
time-sharing” (e.g., [3, Theorem 15.3.2]), where nodes
avoid interference by taking turns transmitting. A popular
alternative model is the “conferencing” cooperation model
[4]; in conferencing, unlike in time-sharing, encoders share
information about the messages they wish to transmit and
use that shared information to coordinate their channel
inputs. In this work, we employ a similar approach, but
in our cooperation model, encoders communicate indirectly.
Specifically, the encoders communicate through another node,
which we call the cooperation facilitator (CF) [5], [6]. Figure 1
This material is based upon work supported by the National Science
Foundation under Grant Numbers 1527524 and 1526771, and has appeared
in part in [1], [2].
P. Noorzad was with the California Institute of Technology, Pasadena, CA
91125 USA. (email: parham.n@outlook.com).
M. Langberg is with the State University of New York at Buffalo, Buffalo,
NY 14260 USA (email: mikel@buffalo.edu).
M. Effros is with the California Institute of Technology, Pasadena, CA
91125 USA (email: effros@caltech.edu).
Fig. 1: A network consisting of two encoders that first communicate with a CF and then, using the information they receive from it, transmit their codewords over the MAC to the decoder.
depicts the CF model in the two-user multiple access channel
(MAC) scenario.
The CF enables cooperation between the encoders through its rate-limited input and output links. Prior to choosing a codeword to transmit over the channel, each encoder sends a rate-limited function of its message to the CF. The CF uses the information it receives from both encoders to compute a rate-limited function for each encoder. It then transmits the computed values over its output links. Finally, each encoder selects a codeword using its message and the information it receives from the CF.
To simplify our discussion in this section, suppose the CF input link capacities both equal C_in and the CF output link capacities both equal C_out. If C_in ≤ C_out, then the optimal strategy for the CF is to simply forward the information it receives from one encoder to the other. Using the capacity region of the MAC with conferencing encoders [4], it follows that the average-error sum-capacity gain of CF cooperation in this case is bounded from above by 2C_in and does not depend on the precise value of C_out ≥ C_in. If C_in > C_out, however, the situation is more complicated since the CF cannot forward all of its incoming information. While the 2C_in upper bound still holds, the dependence of the sum-capacity gain on C_out is less clear. If the CF simply forwards part of the information it receives, then again by [4], the average-error sum-capacity gain is at most 2C_out. The 2C_out bound has an intuitive interpretation: it reflects the amount of information the CF shares with the encoders, perhaps suggesting that the benefit of sharing information with rate 2C_out with the encoders is at most 2C_out. It turns out, though, that a much larger gain is possible through more sophisticated coding techniques.
Authorized licensed use limited to: CALIFORNIA INSTITUTE OF TECHNOLOGY. Downloaded on July 16,2021 at 18:47:30 UTC from IEEE Xplore. Restrictions apply.
Specifically, in prior work [6, Theorem 3], we show that there exist MACs such that for any fixed C_in > 0, the average-error sum-capacity has a derivative in C_out that is infinite at C_out = 0; that is, for small C_out, the gain resulting from cooperation exceeds any function that goes through the origin and has bounded derivative at that point.
The large sum-capacity gain described above is not limited to the average-error scenario. In fact, in related work [5, Proposition 5], we show that for any MAC for which, in the absence of cooperation, the average-error sum-capacity is strictly greater than the maximal-error sum-capacity, adding a CF and measuring the maximal-error sum-capacity for fixed C_in > 0 gives a curve that is discontinuous at C_out = 0. In this case, we say that "negligible cooperation" results in a non-negligible capacity benefit.
Given these earlier results, a number of important questions remain open in both the average- and maximal-error cases. For example, we wish to quantify the behavior of the average-error sum-capacity not only at the single point C_out = 0, but rather for a range of C_out values. We also seek to understand, in the average-error case, whether the sum-capacity gain can be discontinuous in C_out. In the maximal-error case, we would like to determine whether a constant number of CF output bits is sufficient to achieve the discontinuities already shown to be possible when the CF output link capacities tend to zero.

We next provide an overview of the key results in this work, which make progress towards addressing the above questions. Departing from the simplified discussion above, in our results we allow the CF input and output link capacities to differ.
To this end, consider a discrete MAC with a (C_in, C_out)-CF, where C_in := (C^1_in, C^2_in) and C_out := (C^1_out, C^2_out). To capture the dependence on these parameters, we denote the average- and maximal-error sum-capacities of this network by C_sum(C_in, C_out) and C_sum,max(C_in, C_out), respectively.
Furthermore, let the components of C̄_in = (C̄^1_in, C̄^2_in) be sufficiently large that, without loss of generality, we can assume that for any C_out ∈ R^2_{≥0}, a (C̄_in, C_out)-CF has access to the messages of both encoders. Our results tackle both the special case where the CF has full knowledge of all the messages (i.e., C_in = C̄_in) and the general case where the CF may or may not have full knowledge (i.e., C_in is arbitrary).
The CF with Full Knowledge: In this scenario, we explicitly observe the contrast between average- and maximal-error sum-capacities, showing that the former is continuous with respect to the CF output link capacities, while the latter is not only discontinuous, but a constant number of bits can suffice to achieve that discontinuity.
1) [Continuity of Average-Error Sum-Capacity in C_out] Theorem 3 shows that for any discrete MAC, the mapping C_out ↦ C_sum(C̄_in, C_out) defined on R^2_{≥0} is continuous.
2) [Discontinuity of Maximal-Error Sum-Capacity in C_out] Theorem 4 shows that there exists a discrete MAC where

C_sum,max(C̄_in, k-bit) > C_sum,max(C̄_in, 0).

Here C_sum,max(C̄_in, k-bit) denotes the maximal-error sum-capacity of a MAC and a CF that has full knowledge of the messages and sends at most k bits back to the encoders over the entire blocklength.
The General CF: In this scenario, we can uniquely characterize the MACs that give a discontinuity in maximal-error sum-capacity. The average-error case, however, is more subtle. Our results for the average-error case include a lower bound on the sum-capacity gain. The upper bound, which is needed to establish continuity, proves to be more difficult and remains open in the general case.
1) [Square Root Lower Bound for Average-Error Sum-Capacity] Theorem 5 demonstrates that for a nonempty class of discrete MACs and any (C_in, C_out) ∈ R^2_{>0} × R^2_{>0}, the mapping

h ↦ C_sum(C_in, h C_out)

grows at least as fast as √h in the neighborhood of h = 0^+.
2) [Continuity of Average-Error Sum-Capacity in (C_in, C_out)] Theorem 6 provides sufficient conditions under which the average-error sum-capacity is continuous in the CF link capacities. Specifically, for any discrete MAC, C_sum(C̃_in, C̃_out) is continuous at (C̃_in, C̃_out) = (C_in, C_out) if any of the following conditions hold:

a) The CF and both encoders communicate: C^i_in > 0 and C^i_out > 0 for i ∈ {1, 2}.

b) The CF does not send information to either encoder: C^1_out = 0 and C^2_out = 0.

c) Both encoders send their full messages to the CF: C^1_in ≥ C̄^1_in and C^2_in ≥ C̄^2_in.

d) At most one of the encoders sends its message to the CF: C^1_in = 0 or C^2_in = 0.
3) [Discontinuity of Maximal-Error Sum-Capacity in C_out: Necessary and Sufficient Condition] Theorem 8 states that for any discrete MAC and fixed C_in ∈ R^2_{>0}, the mapping C_out ↦ C_sum,max(C_in, C_out) is not continuous if and only if

C_sum(0, 0) > C_sum,max(0, 0).   (1)
The remainder of the paper is organized as follows. In Section II, we discuss related work from the literature. In Section III, we present a precise description of our model. In Section IV, we review prior results on the sum-capacity gain of cooperation via a CF using the definitions from Section III. We state our contributions in Section V. As some of our results require lengthy proofs, prior to providing all the details in Section IX, we present outlines of the main arguments in Sections VI, VII, and VIII. We give a summary of this work in Section X.
II. RELATED WORK
A continuity problem similar to the one considered here
appears in studying rate-limited feedback over the MAC. In
that setting, Sarwate and Gastpar [7] use the dependence-
balance bounds of Hekstra and Willems [8] to show that as
the feedback rate converges to zero, the average-error capacity
region converges to the average-error capacity region of the
same MAC in the absence of feedback.
The problem we study here can also be formulated as an
“edge removal problem” as introduced by Ho, Effros, and
Jalali [9], [10]. The edge removal problem seeks to quantify
the capacity effect of removing a single edge from a network.
While bounds on this capacity impact exist in a number
of limited scenarios (see, for example, [9] and [10]), the
problem remains open in the general case. In the context of
network coding, Langberg and Effros show that this problem
is connected to a number of other open problems, including
the difference between the 0-error and ε-error network coding capacity regions [11] and the difference between the lossless source coding regions for independent and dependent sources [12].
In [13], Kosut and Kliewer present different variations of the
edge removal problem in a unified setting. In their terminology,
one of the problems the present work investigates is whether
the network consisting of a MAC and a CF satisfies the
“weak edge removal property” with respect to the average-
error reliability criterion. A discussion in [14, Chapter 1]
summarizes the known results for each variation of the edge
removal problem as well.
The question of whether the capacity region of a network
consisting of noiseless links is continuous with respect to the
link capacities is studied by Gu, Effros, and Bakshi [15] and
Chan and Grant [16]. The present work differs from [15], [16]
in the network under consideration; while our network does
have noiseless links (the CF input and output links), it also
contains a multiterminal component (the MAC) which may
exhibit interference or noise; no such component appears in
[15], [16].
For the maximal-error case, our study focuses on the effect of a constant number of bits of communication in the memoryless setting. For noisy networks with memory, it is not difficult to see that even one bit of communication may indeed affect the capacity region. For example, consider a binary symmetric channel whose error probability θ is chosen at random and then fixed for all time. If for i ∈ {1, 2}, θ equals θ_i with positive probability p_i, and 0 ≤ θ_1 < θ_2 ≤ 1/2, then a single bit of feedback (not rate 1, but exactly one bit no matter how large the blocklength) from the receiver to the transmitter suffices to increase the capacity. For memoryless channels, the question is far more subtle and is the subject of our study.
In the next section, we present the cooperation model we
consider in this work.
III. THE COOPERATION FACILITATOR MODEL
In this work, we study cooperation between two encoders that communicate their messages to a decoder over a stationary, memoryless, and discrete MAC. Such a MAC can be represented by the triple (X_1 × X_2, p(y|x_1, x_2), Y), where X_1, X_2, and Y are finite sets and p(y|x_1, x_2) is a conditional probability mass function. For any positive integer n ≥ 2, the nth extension of this MAC is given by

p(y^n | x^n_1, x^n_2) := ∏_{t=1}^{n} p(y_t | x_{1t}, x_{2t}).
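As a quick numerical check (not part of the paper), the product formula above can be evaluated directly for a small channel. The noisy adder MAC below, with hypothetical transition probabilities, is used only to verify that for each fixed pair of input sequences, the extension is a probability distribution over output sequences:

```python
import itertools
from functools import reduce

def nth_extension(p, n):
    """n-fold memoryless extension: maps (y^n, x1^n, x2^n) to
    prod_{t=1}^{n} p(y_t | x1_t, x2_t), where p[(y, x1, x2)] holds the
    single-letter transition probability p(y | x1, x2)."""
    ys = sorted({y for (y, _, _) in p})
    xs1 = sorted({x1 for (_, x1, _) in p})
    xs2 = sorted({x2 for (_, _, x2) in p})
    ext = {}
    for x1n in itertools.product(xs1, repeat=n):
        for x2n in itertools.product(xs2, repeat=n):
            for yn in itertools.product(ys, repeat=n):
                ext[(yn, x1n, x2n)] = reduce(
                    lambda acc, t: acc * p[(yn[t], x1n[t], x2n[t])],
                    range(n), 1.0)
    return ext

# Toy noisy adder MAC: Y equals X1 + X2 with probability 0.9, and each of
# the two remaining output symbols with probability 0.05 (hypothetical).
p = {(y, x1, x2): (0.9 if y == x1 + x2 else 0.05)
     for x1 in (0, 1) for x2 in (0, 1) for y in (0, 1, 2)}
ext = nth_extension(p, 2)

# For a fixed input sequence pair, ext sums to one over y^n.
total = sum(v for (yn, x1n, x2n), v in ext.items()
            if x1n == (0, 1) and x2n == (1, 0))
```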
For each positive integer n, called the blocklength, and nonnegative real numbers R_1 and R_2, called the rates, we next define a (2^{nR_1}, 2^{nR_2}, n)-code for communication over a MAC with a (C_in, C_out)-CF. Here C_in = (C^1_in, C^2_in) and C_out = (C^1_out, C^2_out) represent the capacities of the CF input and output links, respectively. (See Figure 1.)
A. Positive Rate Cooperation
For every x ≥ 1, let [x] denote the set {1, ..., ⌊x⌋}. For i ∈ {1, 2}, the transmission of encoder i to the CF is represented by a mapping

φ_i : [2^{nR_i}] → [2^{nC^i_in}].
The CF uses the information it receives from the encoders to compute a function

ψ_i : [2^{nC^1_in}] × [2^{nC^2_in}] → [2^{nC^i_out}]

for encoder i, where i ∈ {1, 2}. Encoder i uses its message and what it receives from the CF to select a codeword according to

f_i : [2^{nR_i}] × [2^{nC^i_out}] → X^n_i.
The decoder finds estimates of the transmitted messages using the channel output. It is represented by a mapping

g : Y^n → [2^{nR_1}] × [2^{nR_2}].
The collection of mappings (φ_1, φ_2, ψ_1, ψ_2, f_1, f_2, g) defines a (2^{nR_1}, 2^{nR_2}, n)-code for the MAC with a (C_in, C_out)-CF.
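The mapping structure above can be sketched in code. In the following toy instantiation (not from the paper), the blocklength, rates, link capacities, and the specific choices of φ_i, ψ_i, and f_i are all hypothetical; the sketch only illustrates how the domains and ranges compose, with a CF that simply forwards each encoder's report to the other:

```python
# Toy parameters (hypothetical): blocklength n, rates R_i, and CF link
# capacities C_in^i, C_out^i, giving integer alphabet sizes below.
n = 4
R = (0.5, 0.5)
C_in, C_out = (0.25, 0.25), (0.25, 0.25)
M = [2 ** int(n * r) for r in R]          # message set sizes |[2^{nR_i}]| = 4
K_in = [2 ** int(n * c) for c in C_in]    # CF input alphabet sizes = 2
K_out = [2 ** int(n * c) for c in C_out]  # CF output alphabet sizes = 2

# phi_i : [2^{nR_i}] -> [2^{nC_in^i}], here a simple truncation of the message.
phi = [lambda w, i=i: w % K_in[i] for i in range(2)]

# psi_i : [2^{nC_in^1}] x [2^{nC_in^2}] -> [2^{nC_out^i}], here a CF that
# forwards the other encoder's report.
psi = [lambda a, b, i=i: (b if i == 0 else a) % K_out[i] for i in range(2)]

# f_i : [2^{nR_i}] x [2^{nC_out^i}] -> {0,1}^n, here the binary expansion of
# an index combining the message with the CF output.
def f(i, w, z):
    idx = w * K_out[i] + z
    return tuple((idx >> t) & 1 for t in range(n))

w = (3, 1)                                        # message pair (w_1, w_2)
reports = [phi[i](w[i]) for i in range(2)]        # sent to the CF
z = [psi[i](reports[0], reports[1]) for i in range(2)]  # CF outputs
codewords = [f(i, w[i], z[i]) for i in range(2)]  # channel inputs X_i^n
```

Any concrete scheme for this model fills in these seven mappings; the results in this paper are of course obtained with far more carefully designed choices.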
B. Constant Size Cooperation
To address the setting of a constant number of cooperation bits, we modify the output link of the CF to have support [2^k] for some fixed integer k; unlike the prior support [2^{nC^i_out}], the support of this link is independent of the blocklength n.
Then, for i ∈ {1, 2}, the transmission of encoder i to the CF is represented by a mapping

φ_i : [2^{nR_i}] → [2^{nC^i_in}].

The CF uses the information it receives from the encoders to compute a function

ψ_i : [2^{nC^1_in}] × [2^{nC^2_in}] → [2^k]

for encoder i, where i ∈ {1, 2}. Encoder i, as before, uses its message and what it receives from the CF to select a codeword according to

f_i : [2^{nR_i}] × [2^k] → X^n_i.
We now say that (φ_1, φ_2, ψ_1, ψ_2, f_1, f_2, g) defines a (2^{nR_1}, 2^{nR_2}, n)-code for the MAC with a (C_in, k-bit)-CF.
C. Capacity Region
For a fixed code, the probability of decoding a particular transmitted message pair (w_1, w_2) incorrectly is given by

λ_n(w_1, w_2) := Σ_{y^n : g(y^n) ≠ (w_1, w_2)} p(y^n | f_1(w_1, z_1), f_2(w_2, z_2)),
where z_1 and z_2 are the CF outputs and are calculated, for i ∈ {1, 2}, according to

z_i = ψ_i(φ_1(w_1), φ_2(w_2)).
The average probability of error is defined as

P^(n)_{e,avg} := (1 / 2^{n(R_1 + R_2)}) Σ_{w_1, w_2} λ_n(w_1, w_2),
and the maximal probability of error is given by

P^(n)_{e,max} := max_{w_1, w_2} λ_n(w_1, w_2).
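A small numerical example (not from the paper) shows how sharply the two criteria can differ, a gap that is central to the contrast studied in this work. The one-shot channel, code, and decoder below are hypothetical toys:

```python
import itertools

# Toy one-shot (n = 1) comparison of average vs. maximal error probability.
eps = 0.1
def p(y, x1, x2):
    # Noisy adder: Y = X1 + X2 with probability 1 - eps; otherwise one of
    # the other two symbols of {0, 1, 2}, each with probability eps / 2.
    return 1 - eps if y == x1 + x2 else eps / 2

def g(y):
    # Decoder resolves each output symbol to one message pair; the pair
    # (1, 0) is never produced by g, so it is always decoded incorrectly.
    return {0: (0, 0), 1: (0, 1), 2: (1, 1)}[y]

def lam(w1, w2):
    # lambda_n(w1, w2): probability that (w1, w2) is decoded incorrectly
    # (encoders transmit their messages directly; CF outputs are ignored).
    return sum(p(y, w1, w2) for y in (0, 1, 2) if g(y) != (w1, w2))

pairs = list(itertools.product((0, 1), repeat=2))
P_avg = sum(lam(*pair) for pair in pairs) / len(pairs)
P_max = max(lam(*pair) for pair in pairs)
```

Here three of the four message pairs are decoded correctly with probability 0.9, while the pair (1, 0) is never decoded correctly, so P_avg = 0.325 but P_max = 1: a code can be good on average yet useless under the maximal-error criterion.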
A rate pair (R_1, R_2) is achievable with respect to the average-error reliability criterion if there exists an infinite sequence of (2^{nR_1}, 2^{nR_2}, n)-codes such that P^(n)_{e,avg} → 0 as n → ∞. The average-error capacity region of a MAC with a (C_in, C_out)-CF, denoted by C_avg(C_in, C_out), is the closure of the set of all rate pairs that are achievable with respect to the average-error reliability criterion. The average-error sum-capacity is defined as

C_sum(C_in, C_out) := max_{(R_1, R_2) ∈ C_avg(C_in, C_out)} (R_1 + R_2).
By replacing P^(n)_{e,avg} with P^(n)_{e,max}, we can similarly define achievable rates with respect to the maximal-error reliability criterion, the maximal-error capacity region, and the maximal-error sum-capacity. For a MAC with a (C_in, C_out)-CF, we denote the maximal-error capacity region and sum-capacity by C_max(C_in, C_out) and C_sum,max(C_in, C_out), respectively.
IV. PRIOR RESULTS ON THE SUM-CAPACITY GAIN OF COOPERATION

We next review a number of results from [5], [6] which describe the sum-capacity gain of cooperation under the CF model. We begin with the average-error case.
We next review a number of results from [5], [6] which
describe the sum-capacity gain of cooperation under the CF
model. We begin with the average-error case.
Consider a discrete MAC (X_1 × X_2, p(y|x_1, x_2), Y). Let p_ind(x_1, x_2) = p_ind(x_1) p_ind(x_2) be a distribution that satisfies

I_ind(X_1, X_2; Y) := I(X_1, X_2; Y)|_{p_ind(x_1, x_2)} = max_{p(x_1) p(x_2)} I(X_1, X_2; Y);   (2)
subscript "ind" here denotes the independence of the codewords from encoders 1 and 2 in the absence of cooperation. Next, for an arbitrary distribution p_dep(x_1, x_2), define

I_dep(X_1, X_2; Y) := I(X_1, X_2; Y)|_{p_dep(x_1, x_2)}.
Furthermore, let p_ind(y) and p_dep(y) be given by

p_ind(y) := Σ_{x_1, x_2} p_ind(x_1, x_2) p(y|x_1, x_2),
p_dep(y) := Σ_{x_1, x_2} p_dep(x_1, x_2) p(y|x_1, x_2),

respectively. For the distributions p_ind(y) and p_dep(y), we denote their KL divergence by

D(p_dep(y) ‖ p_ind(y)) := Σ_y p_dep(y) log [p_dep(y) / p_ind(y)].
Now let C be the class of all discrete MACs (X_1 × X_2, p(y|x_1, x_2), Y) such that for a distribution p_ind(x_1, x_2) = p_ind(x_1) p_ind(x_2) that satisfies (2), there exists another distribution p_dep(x_1, x_2) such that the support of p_dep(x_1, x_2) is contained in the support of p_ind(x_1) p_ind(x_2), and¹

I_dep(X_1, X_2; Y) + D(p_dep(y) ‖ p_ind(y)) > I_ind(X_1, X_2; Y).   (3)
Prior to discussing the significance of the class $\mathcal{C}$, we provide some examples. Consider a MAC with binary inputs $X_1, X_2 \in \{0,1\}$. If $Y = X_1 \oplus X_2$, where $\oplus$ is addition mod 2 and $Y \in \{0,1\}$, then the resulting MAC is not in $\mathcal{C}$. However, if $Y = X_1 + X_2$, where $+$ is integer addition and $Y \in \{0,1,2\}$, then the resulting MAC is in $\mathcal{C}$.
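These two examples can be checked numerically. The sketch below uses the simplified membership condition from footnote 1 ($I_{\mathrm{dep}} > I_{\mathrm{ind}}$); the grid search over product distributions and the particular choice of $p_{\mathrm{dep}}$ are illustrative devices, not the paper's method.

```python
import math

def entropy(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_info(joint, channel):
    """I(X1,X2;Y) for a deterministic MAC y = channel(x1,x2); equals H(Y)."""
    py = {}
    for (x1, x2), p in joint.items():
        y = channel(x1, x2)
        py[y] = py.get(y, 0.0) + p
    return entropy(py.values())

def max_I_ind(channel, steps=100):
    """Grid search of I(X1,X2;Y) over product input distributions p(x1)p(x2)."""
    best = 0.0
    for i in range(steps + 1):
        for j in range(steps + 1):
            p, q = i / steps, j / steps
            joint = {(0, 0): (1-p)*(1-q), (0, 1): (1-p)*q,
                     (1, 0): p*(1-q),     (1, 1): p*q}
            best = max(best, mutual_info(joint, channel))
    return best

xor_mac   = lambda x1, x2: (x1 + x2) % 2  # Y = X1 (+) X2: not in C
adder_mac = lambda x1, x2: x1 + x2        # Y = X1 + X2: in C

I_ind_xor   = max_I_ind(xor_mac)    # 1 bit, at uniform inputs
I_ind_adder = max_I_ind(adder_mac)  # 1.5 bits, at uniform inputs

# A correlated input with full support (so the support condition relative
# to the uniform p_ind(x1) p_ind(x2) is automatic):
p_dep = {(0, 0): 1/3, (0, 1): 1/6, (1, 0): 1/6, (1, 1): 1/3}
I_dep_adder = mutual_info(p_dep, adder_mac)  # log2(3) ≈ 1.585 > 1.5
I_dep_xor   = mutual_info(p_dep, xor_mac)    # ≈ 0.918 <= 1
```

For the mod-2 MAC, $I(X_1,X_2;Y) = H(Y) \leq 1$ for every joint input distribution, so no dependent input can beat $I_{\mathrm{ind}} = 1$; for the adder MAC, correlation pushes $H(Y)$ up to $\log_2 3$.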
Theorem 1 below, which demonstrates the significance of $\mathcal{C}$, is a special case of [6, Theorem 3] for the two-user case.$^2$

Theorem 1. For any MAC in $\mathcal{C}$ and any $(\mathbf{C}_{\mathrm{in}}, \mathbf{v}) \in \mathbb{R}^2_{>0} \times \mathbb{R}^2_{>0}$,
$$\lim_{h \to 0^+} \frac{C_{\mathrm{sum}}(\mathbf{C}_{\mathrm{in}}, h\mathbf{v}) - C_{\mathrm{sum}}(\mathbf{C}_{\mathrm{in}}, \mathbf{0})}{h} = \infty.$$
We next describe the maximal-error sum-capacity gain.
While it is possible in the average-error scenario to achieve
a sum-capacity that has an infinite slope, a stronger result
is known in the maximal-error case. There exists a class of
MACs for which the maximal-error sum-capacity exhibits a
discontinuity in the capacities of the CF output links. This is
stated formally in the next proposition, which is a special case
of [5, Proposition 5]. The proposition relies on the existence
of a discrete MAC with average-error sum-capacity larger than
its maximal-error sum-capacity; that existence was first proven
by Dueck [17]. We investigate further properties of Dueck’s
MAC in [5, Subsection VI-E].
Proposition 2. Consider a discrete MAC for which
$$C_{\mathrm{sum}}(\mathbf{0},\mathbf{0}) > C_{\mathrm{sum,max}}(\mathbf{0},\mathbf{0}). \qquad (4)$$
(4)
Fix
C
in
R
2
>
0
. Then
C
sum
,
max
(
C
in
,
C
out
)
is not continuous
at
C
out
=
0
.
We next present the main results of this work.
V. Our Results on Average- and Maximal-Error Sum-Capacities
As discussed in Section I, we present our results for the large $\mathbf{C}_{\mathrm{in}}$ (i.e., the CF has full knowledge of the messages) and arbitrary $\mathbf{C}_{\mathrm{in}}$ cases separately. We begin with the former case.
$^1$As kindly demonstrated by one of the reviewers, the condition in (3) can be simplified to $I_{\mathrm{dep}}(X_1,X_2;Y) > I_{\mathrm{ind}}(X_1,X_2;Y)$. We provide their proof in the Appendix.
$^2$Note that Theorem 1 does not lead to any conclusions regarding continuity; a function $f(x)$ with infinite derivative at $x = 0$ can be continuous (e.g., $f(x) = \sqrt{x}$) or discontinuous (e.g., $f(x) = \lceil x \rceil$).
A. CF with Full Knowledge of the Messages
Given a discrete MAC $(\mathcal{X}_1 \times \mathcal{X}_2, p(y|x_1,x_2), \mathcal{Y})$, let the components of $\mathbf{C}_{\mathrm{in}} = (C_1^{\mathrm{in}}, C_2^{\mathrm{in}})$ be sufficiently large so that any CF with input link capacities $C_1^{\mathrm{in}}$ and $C_2^{\mathrm{in}}$ has full knowledge of the encoders' messages. For example, we can choose $\mathbf{C}_{\mathrm{in}}$ such that
$$\min\{C_1^{\mathrm{in}}, C_2^{\mathrm{in}}\} > \max_{p(x_1,x_2)} I(X_1,X_2;Y).$$
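As an illustration of this threshold, consider the integer-adder MAC from Section IV. Since the channel is deterministic, $I(X_1,X_2;Y) = H(Y)$, and the maximum over all joint input distributions reduces to maximizing entropy over the reachable output distributions; the grid search below is a numerical sketch for this example channel, not a computation from the paper.

```python
import math

def entropy(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# For the adder MAC Y = X1 + X2 with binary inputs, the joint input
# distribution enters I(X1,X2;Y) = H(Y) only through the triple
# (p(0,0), p(0,1) + p(1,0), p(1,1)), which can be ANY pmf on {0,1,2}.
# The threshold is therefore the maximum entropy over three outcomes.
steps = 300
threshold = 0.0
for i in range(steps + 1):
    for j in range(steps + 1 - i):
        p0, p1 = i / steps, j / steps
        threshold = max(threshold, entropy([p0, p1, 1 - p0 - p1]))

# threshold -> log2(3) ≈ 1.585 bits, attained at the uniform pmf (1/3,1/3,1/3)
```

Note that this exceeds the no-cooperation value $1.5$ bits attained by independent uniform inputs, so the full-knowledge condition here reads $\min\{C_1^{\mathrm{in}}, C_2^{\mathrm{in}}\} > \log_2 3$.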
Our first result, Theorem 3, addresses the continuity of $C_{\mathrm{sum}}(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$ as a function of $\mathbf{C}_{\mathrm{out}}$ over $\mathbb{R}^2_{\geq 0}$. We provide an outline of the proof in Section VI. We also remark that while in this work we only consider discrete MACs, we expect similar ideas, when combined with techniques developed by Fong and Tan [18] that establish the strong converse for the Gaussian MAC, to extend to Gaussian MACs with a CF as well. The problem of continuity in the Gaussian case is left for future work.
Theorem 3. For any discrete MAC, the mapping $\mathbf{C}_{\mathrm{out}} \mapsto C_{\mathrm{sum}}(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$ defined on $\mathbb{R}^2_{\geq 0}$ is continuous.
In words, Theorem 3 states that the average-error sum-capacity is continuous with respect to the CF output link capacities, a result that stands in stark contrast to the maximal-error case, which we present next.
The second contribution of our work in the case where the CF has full knowledge of the messages is the discontinuity of the maximal-error sum-capacity when the CF can send only a constant number of bits back to the encoders. For this setting, we use Dueck's deterministic memoryless MAC [17], which we next define. Consider the deterministic MAC $(\mathcal{X}_1 \times \mathcal{X}_2, p_{\mathrm{Dueck}}(y|x_1,x_2), \mathcal{Y})$, where $\mathcal{X}_1 = \{a,b,A,B\}$, $\mathcal{X}_2 = \{0,1\}$, and $\mathcal{Y} = \{a,b,c,A,B,C\} \times \{0,1\}$. Furthermore, $p_{\mathrm{Dueck}}(y|x_1,x_2)$ equals one whenever $y = W(x_1,x_2)$ for the mapping $W : \mathcal{X}_1 \times \mathcal{X}_2 \to \mathcal{Y}$ and equals zero otherwise. The mapping $W$ is defined as
$$W(x_1,x_2) := \begin{cases} (c,0) & \text{if } (x_1,x_2) \in \{(a,0),(b,0)\} \\ (C,1) & \text{if } (x_1,x_2) \in \{(A,1),(B,1)\} \\ (x_1,x_2) & \text{otherwise.} \end{cases}$$
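The mapping $W$ can be transcribed directly. The small check below confirms the channel's defining features: the pairs $(a,0),(b,0)$ collide into $(c,0)$, the pairs $(A,1),(B,1)$ collide into $(C,1)$, every other input pair is received noiselessly, and the second output coordinate always reveals $x_2$ to the decoder.

```python
# Dueck's deterministic MAC: input alphabets and the mapping W.
X1 = ['a', 'b', 'A', 'B']
X2 = [0, 1]

def W(x1, x2):
    if (x1, x2) in {('a', 0), ('b', 0)}:
        return ('c', 0)
    if (x1, x2) in {('A', 1), ('B', 1)}:
        return ('C', 1)
    return (x1, x2)

table = {(x1, x2): W(x1, x2) for x1 in X1 for x2 in X2}
outputs = set(table.values())

# 8 input pairs but only 6 distinct outputs: two collisions of two pairs each.
collisions = {y for y in outputs
              if sum(1 for v in table.values() if v == y) == 2}
```

It is exactly these two collisions that make maximal-error coding hard without cooperation, while encoder 2's input is always decodable.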
Theorem 4. For $k = 5$, the discrete MAC $(\mathcal{X}_1 \times \mathcal{X}_2, p_{\mathrm{Dueck}}(y|x_1,x_2), \mathcal{Y})$ satisfies
$$C_{\mathrm{sum,max}}(\mathbf{C}_{\mathrm{in}}, k\text{-bit}) > C_{\mathrm{sum,max}}(\mathbf{C}_{\mathrm{in}}, \mathbf{0}). \qquad (5)$$
We outline the proof of Theorem 4 in Section VII. We
provide detailed proofs of our claims in Section IX.
B. CF with Arbitrary Knowledge of the Messages
In this subsection, we do not assume the CF has full
knowledge of the messages and instead consider the general
case. Our first result extends Theorem 1 by characterizing the growth rate of the average-error sum-capacity of a MAC with a CF as a function of the CF output link capacities. We remark that a result similar to Theorem 5 below also holds for the Gaussian MAC [6, Prop. 9].
Theorem 5. Let $(\mathcal{X}_1 \times \mathcal{X}_2, p(y|x_1,x_2), \mathcal{Y})$ be a MAC in $\mathcal{C}$, and suppose $(\mathbf{C}_{\mathrm{in}}, \mathbf{v}) \in \mathbb{R}^2_{>0} \times \mathbb{R}^2_{>0}$. Then
$$\liminf_{h \to 0^+} \frac{C_{\mathrm{sum}}(\mathbf{C}_{\mathrm{in}}, h\mathbf{v}) - C_{\mathrm{sum}}(\mathbf{C}_{\mathrm{in}}, \mathbf{0})}{\sqrt{h}} > 0. \qquad (6)$$
In words, Theorem 5 states that for any MAC in
C
, the
average-error sum-capacity of that MAC with a CF grows at
least as fast as the square root of the CF output link capacities.
We prove this theorem in Subsection IX-A.
We next describe the second result concerning the average-error sum-capacity. While our first result, Theorem 5 above, concerns the growth rate of $C_{\mathrm{sum}}(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$ near $\mathbf{C}_{\mathrm{out}} = \mathbf{0}$ over $\mathbb{R}^2_{\geq 0}$, Theorem 6 proves the continuity of $C_{\mathrm{sum}}(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$ over several subsets of $\mathbb{R}^2_{\geq 0} \times \mathbb{R}^2_{\geq 0}$. We give the proof of this result in Section VIII.
Theorem 6. For any discrete MAC, consider the mapping
$$(\tilde{\mathbf{C}}_{\mathrm{in}}, \tilde{\mathbf{C}}_{\mathrm{out}}) \mapsto C_{\mathrm{sum}}(\tilde{\mathbf{C}}_{\mathrm{in}}, \tilde{\mathbf{C}}_{\mathrm{out}}),$$
defined on $\mathbb{R}^2_{\geq 0} \times \mathbb{R}^2_{\geq 0}$. This mapping is continuous at $(\tilde{\mathbf{C}}_{\mathrm{in}}, \tilde{\mathbf{C}}_{\mathrm{out}}) = (\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$ if any of the following conditions hold:

a) $C_i^{\mathrm{in}} > 0$ and $C_i^{\mathrm{out}} > 0$ for $i \in \{1,2\}$.

b) $C_1^{\mathrm{out}} = 0$ and $C_2^{\mathrm{out}} = 0$.

c) $C_1^{\mathrm{in}}$ and $C_2^{\mathrm{in}}$ are sufficiently large in the sense of Subsection V-A.

d) $C_1^{\mathrm{in}} = 0$ or $C_2^{\mathrm{in}} = 0$.
Note that Theorem 6 leaves open the continuity of the average-error sum-capacity in one case. If $C_1^{\mathrm{in}}$ and $C_2^{\mathrm{in}}$ are positive but not sufficiently large and $C_1^{\mathrm{out}} > 0$, then we have not established the continuity of the sum-capacity at $C_2^{\mathrm{out}} = 0^+$. (Clearly, the symmetric case where $C_2^{\mathrm{out}} > 0$ and $C_1^{\mathrm{out}} \to 0^+$ remains open as well.) This last scenario remains a subject for future work. The corollary below, however, demonstrates that to prove the continuity of the average-error sum-capacity $C_{\mathrm{sum}}$ at these remaining points, we may fix $\mathbf{C}_{\mathrm{in}}$ and one of $C_1^{\mathrm{out}}$ or $C_2^{\mathrm{out}}$, meaning that we only need to show the continuity of $C_{\mathrm{sum}}$ in a single variable. We prove this corollary in Subsection IX-B.
Corollary 7. For any discrete MAC, to prove that the mapping
$$(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}}) \mapsto C_{\mathrm{sum}}(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}}),$$
defined on $\mathbb{R}^2_{\geq 0} \times \mathbb{R}^2_{\geq 0}$, is continuous, it suffices to show that for all $(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}}) \in \mathbb{R}^2_{>0} \times \mathbb{R}^2_{>0}$, we have
$$C_{\mathrm{sum}}\big(\mathbf{C}_{\mathrm{in}}, (\tilde{C}_1^{\mathrm{out}}, C_2^{\mathrm{out}})\big) \to C_{\mathrm{sum}}\big(\mathbf{C}_{\mathrm{in}}, (0, C_2^{\mathrm{out}})\big) \quad \text{as } \tilde{C}_1^{\mathrm{out}} \to 0^+$$
and
$$C_{\mathrm{sum}}\big(\mathbf{C}_{\mathrm{in}}, (C_1^{\mathrm{out}}, \tilde{C}_2^{\mathrm{out}})\big) \to C_{\mathrm{sum}}\big(\mathbf{C}_{\mathrm{in}}, (C_1^{\mathrm{out}}, 0)\big) \quad \text{as } \tilde{C}_2^{\mathrm{out}} \to 0^+.$$
Finally, we describe our result for the maximal-error sum-capacity for an arbitrary CF. Recall that Proposition 2 gives a sufficient condition under which $C_{\mathrm{sum,max}}(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$ is not continuous at $\mathbf{C}_{\mathrm{out}} = \mathbf{0}$ for a fixed $\mathbf{C}_{\mathrm{in}} \in \mathbb{R}^2_{>0}$. In Subsection IX-C, we show that the sufficient condition is also necessary. This is stated in the next theorem.
Theorem 8. Fix a discrete MAC and $\mathbf{C}_{\mathrm{in}} \in \mathbb{R}^2_{>0}$. Then $C_{\mathrm{sum,max}}(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$ is not continuous at $\mathbf{C}_{\mathrm{out}} = \mathbf{0}$ if and only if
$$C_{\mathrm{sum}}(\mathbf{0},\mathbf{0}) > C_{\mathrm{sum,max}}(\mathbf{0},\mathbf{0}). \qquad (7)$$
VI. Average-Error Sum-Capacity Continuity: CF with Full Knowledge
In this section, we give an outline of the proof of Theorem 3. We start our study of the continuity of $C_{\mathrm{sum}}(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$ by presenting lower and upper bounds in terms of an auxiliary function $\sigma(\delta)$ defined for $\delta \geq 0$ (Lemma 10). This function is similar to a tool used by Dueck [19] but is slightly different in its reliance on a time-sharing random variable denoted by $U$. The random variable $U$ plays two roles. First, it ensures that $\sigma$ is concave, which immediately proves the continuity of $\sigma$ over $\mathbb{R}_{>0}$. Second, together with a lemma from [19] (Lemma 13 below), it helps us find a single-letter upper bound for $\sigma$ (Corollary 14). We then use that upper bound to prove continuity at $\delta = 0$. We remark that Dueck's technique [19] is used by a number of other authors in related contexts, including Saeedi Bidokhti and Kramer [20] and Kosut and Kliewer [13, Proposition 15]. We first used this technique for the continuity problem in [14, Appendix C].
The following definitions are useful for the description of our lower and upper bounds for $C_{\mathrm{sum}}(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$. For every finite alphabet $\mathcal{U}$ and all $\delta \geq 0$, define the set of probability mass functions $\mathcal{P}^{(n)}_{\mathcal{U}}(\delta)$ on $\mathcal{U} \times \mathcal{X}_1^n \times \mathcal{X}_2^n$ as
$$\mathcal{P}^{(n)}_{\mathcal{U}}(\delta) := \big\{ p(u, x_1^n, x_2^n) : I(X_1^n; X_2^n \,|\, U) \leq n\delta \big\}.$$
Intuitively, $\mathcal{P}^{(n)}_{\mathcal{U}}(\delta)$ captures a family of “mildly dependent” input distributions for our MAC; this mild dependence is parametrized by a bound $\delta$ on the per-symbol mutual information. In the discussion that follows, we relate $\delta$ to the amount of information that the CF shares with the encoders.
For every positive integer $n$, let $\sigma_n : \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ denote the function$^3$
$$\sigma_n(\delta) := \sup_{\mathcal{U}} \max_{p \in \mathcal{P}^{(n)}_{\mathcal{U}}(\delta)} \frac{1}{n} I(X_1^n, X_2^n; Y^n \,|\, U), \qquad (8)$$
where the supremum is over all finite sets $\mathcal{U}$. Thus $\sigma_n(\delta)$ captures something like the maximal sum-rate achievable under the mild dependence described above. As we see in Lemma 12, conditioning on the random variable $U$ in (8) ensures that $\sigma_n$ is concave.
For every $\delta \geq 0$, the sequence $(\sigma_n(\delta))_{n=1}^{\infty}$ satisfies a superadditivity property, which appears in Lemma 9 below. Intuitively, this property says that the sum-rate of the best code of blocklength $m+n$ is bounded from below by the sum-rate of the concatenation of the best codes of blocklengths $m$ and $n$. We prove this lemma in Subsection IX-D.
Lemma 9. For all $m, n \geq 1$, all $\delta \geq 0$, and $\sigma_n(\delta)$ defined as in (8), we have
$$(m+n)\,\sigma_{m+n}(\delta) \geq m\,\sigma_m(\delta) + n\,\sigma_n(\delta).$$
$^3$For $n = 1$, this function also appears in the study of the MAC with negligible feedback [7].
Given Lemma 9, [21, Appendix 4A, Lemma 2] now implies that the sequence of mappings $(\sigma_n)_{n=1}^{\infty}$ converges pointwise to some mapping $\sigma : \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$, and
$$\sigma(\delta) := \lim_{n \to \infty} \sigma_n(\delta) = \sup_n \sigma_n(\delta). \qquad (9)$$
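The convergence step ([21, Appendix 4A, Lemma 2], a Fekete-type lemma) can be illustrated on a toy superadditive sequence; the sequence below is a made-up stand-in for $n\,\sigma_n(\delta)$, chosen only so that superadditivity is easy to verify by hand.

```python
import math

def a(n, rate=2.0, c=1.0):
    """Toy superadditive sequence a_n = rate*n - c*sqrt(n):
    a_{m+n} >= a_m + a_n because sqrt(m+n) <= sqrt(m) + sqrt(n)."""
    return rate * n - c * math.sqrt(n)

# Verify superadditivity on a range of blocklength pairs.
superadditive = all(a(m + n) >= a(m) + a(n) - 1e-12
                    for m in range(1, 60) for n in range(1, 60))

# Fekete-type lemma: a_n / n converges (here, monotonically increases)
# to sup_n a_n / n, which equals the limit rate = 2.
ratios = [a(n) / n for n in range(1, 5001)]
```

Here $a_n/n = 2 - 1/\sqrt{n}$ climbs toward its supremum $2$, mirroring how $\sigma_n(\delta)$ climbs to $\sigma(\delta)$.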
We next present our lower and upper bounds for $C_{\mathrm{sum}}(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$ in terms of $\sigma$. The lower bound follows directly from [6, Corollary 8]. We prove the upper bound in Subsection IX-E.
Lemma 10. For any discrete MAC and any $\mathbf{C}_{\mathrm{out}} \in \mathbb{R}^2_{\geq 0}$, we have
$$\sigma(C_1^{\mathrm{out}} + C_2^{\mathrm{out}}) - \min\{C_1^{\mathrm{out}}, C_2^{\mathrm{out}}\} \leq C_{\mathrm{sum}}(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}}) \leq \sigma(C_1^{\mathrm{out}} + C_2^{\mathrm{out}}).$$
From the remark following Theorem 6, we only need to prove that $C_{\mathrm{sum}}(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$ is continuous on the boundary of $\mathbb{R}^2_{\geq 0}$. On the boundary of $\mathbb{R}^2_{\geq 0}$, however, $\min\{C_1^{\mathrm{out}}, C_2^{\mathrm{out}}\} = 0$, so the two bounds of Lemma 10 coincide. Thus it suffices to show that $\sigma$ is continuous on $\mathbb{R}_{\geq 0}$, which is proven in the next lemma.
Lemma 11. For any finite-alphabet MAC, the function $\sigma$, defined by (9), is continuous on $\mathbb{R}_{\geq 0}$.
To prove Lemma 11, we first consider the continuity of $\sigma$ on $\mathbb{R}_{>0}$ and then focus on the point $\delta = 0^+$. Note that $\sigma$ is the pointwise limit of the sequence of functions $(\sigma_n)_{n=1}^{\infty}$. Lemma 12 uses a time-sharing argument as in [22] to show that each $\sigma_n$ is concave. (See Subsection IX-F for the proof.) Therefore, $\sigma$ is concave as well, and since $\mathbb{R}_{>0}$ is open, $\sigma$ is continuous on $\mathbb{R}_{>0}$.
Lemma 12. For all $n \geq 1$, $\sigma_n$ is concave on $\mathbb{R}_{\geq 0}$.
To prove the continuity of $\sigma(\delta)$ at $\delta = 0$, we find an upper bound for $\sigma$ in terms of $\sigma_1$. For some finite set $\mathcal{U}$ and $\delta > 0$, consider a distribution $p(u, x_1^n, x_2^n) \in \mathcal{P}^{(n)}_{\mathcal{U}}(\delta)$. By the definition of $\mathcal{P}^{(n)}_{\mathcal{U}}(\delta)$,
$$I(X_1^n; X_2^n \,|\, U) \leq n\delta. \qquad (10)$$
Finding a bound for $\sigma$ in terms of $\sigma_1$ requires a single-letter version of (10). In [19, Subsection II.D], Dueck presents the necessary result, which we state in the next lemma and, for completeness, prove in Subsection IX-G.
Lemma 13 (Dueck's Lemma [19]). Fix positive real numbers $\epsilon$ and $\delta$, a positive integer $n$, and a finite alphabet $\mathcal{U}$. If $p \in \mathcal{P}^{(n)}_{\mathcal{U}}(\delta)$, then there exists a set $T \subseteq [n]$ satisfying $|T| \leq n\delta/\epsilon$ such that
$$\forall\, t \notin T : \quad I(X_{1t}; X_{2t} \,|\, U, X_1^T, X_2^T) \leq \epsilon,$$
where for $i \in \{1,2\}$, $X_i^T := (X_{it})_{t \in T}$.
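The cardinality bound in Dueck's Lemma is, at its core, a Markov-style count: if $n$ nonnegative terms sum to at most $n\delta$, then at most $n\delta/\epsilon$ of them can exceed $\epsilon$. (The lemma itself is stronger, since the surviving terms are conditioned on $X_1^T, X_2^T$.) The values below are surrogate per-symbol information terms invented purely for illustration.

```python
n, delta, eps = 1000, 0.05, 0.2

# Surrogate per-symbol terms I_t: a few "large" symbols among many small
# ones, with total at most n * delta (= 50 here).
terms = [0.5 if t % 100 == 0 else 0.04 for t in range(n)]
assert sum(terms) <= n * delta

# T collects the symbols whose term exceeds eps; Markov's inequality gives
# |T| <= (sum of terms) / eps <= n * delta / eps = 250 here.
T = [t for t, v in enumerate(terms) if v > eps]
```

In this toy instance only the 10 "large" symbols land in $T$, comfortably within the bound of 250; outside $T$, every term is small, which is what lets the corollary below single-letterize the remaining symbols.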
Corollary 14 uses Lemma 13 to find an upper bound for $\sigma$ in terms of $\sigma_1$. The proof of this corollary (see Subsection IX-H) combines ideas from [19] with results derived here.

Corollary 14. For all $\epsilon, \delta > 0$, we have
$$\sigma(\delta) \leq \frac{\delta}{\epsilon} \log\big(|\mathcal{X}_1||\mathcal{X}_2|\big) + \sigma_1(\epsilon).$$